Test Report: Docker_Linux_crio_arm64 21833

                    
839ba12bf3f470fdbddc75955152cc8402fc5889:2025-11-01:42154

Failed tests (40/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.74
35 TestAddons/parallel/Registry 15.5
36 TestAddons/parallel/RegistryCreds 0.5
37 TestAddons/parallel/Ingress 146.01
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 6.39
41 TestAddons/parallel/CSI 46.02
42 TestAddons/parallel/Headlamp 3.37
43 TestAddons/parallel/CloudSpanner 6.35
44 TestAddons/parallel/LocalPath 8.42
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 5.34
97 TestFunctional/parallel/ServiceCmdConnect 603.52
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.94
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
135 TestFunctional/parallel/ServiceCmd/Format 0.4
136 TestFunctional/parallel/ServiceCmd/URL 0.41
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.81
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.46
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
177 TestMultiControlPlane/serial/RestartCluster 489.29
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.29
179 TestMultiControlPlane/serial/AddSecondaryNode 2.18
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.23
191 TestJSONOutput/pause/Command 1.64
197 TestJSONOutput/unpause/Command 2.05
281 TestPause/serial/Pause 8.56
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.5
303 TestStartStop/group/old-k8s-version/serial/Pause 9.54
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.65
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.11
321 TestStartStop/group/no-preload/serial/Pause 6.65
327 TestStartStop/group/embed-certs/serial/Pause 8.06
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.36
339 TestStartStop/group/newest-cni/serial/Pause 7.53
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.46
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.51
TestAddons/serial/Volcano (0.74s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable volcano --alsologtostderr -v=1: exit status 11 (741.775372ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:33.578300  293873 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:33.580147  293873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:33.580165  293873 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:33.580182  293873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:33.580630  293873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:31:33.581057  293873 mustload.go:66] Loading cluster: addons-720971
	I1101 09:31:33.581895  293873 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:33.581919  293873 addons.go:607] checking whether the cluster is paused
	I1101 09:31:33.582172  293873 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:33.582243  293873 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:31:33.587050  293873 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:31:33.630846  293873 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:33.630905  293873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:31:33.650612  293873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:31:33.762755  293873 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:33.762868  293873 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:33.798499  293873 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:31:33.798522  293873 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:31:33.798527  293873 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:31:33.798531  293873 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:31:33.798534  293873 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:31:33.798538  293873 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:31:33.798541  293873 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:31:33.798544  293873 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:31:33.798547  293873 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:31:33.798555  293873 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:31:33.798560  293873 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:31:33.798566  293873 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:31:33.798570  293873 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:31:33.798573  293873 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:31:33.798577  293873 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:31:33.798582  293873 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:31:33.798589  293873 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:31:33.798594  293873 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:31:33.798597  293873 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:31:33.798600  293873 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:31:33.798604  293873 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:31:33.798610  293873 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:31:33.798613  293873 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:31:33.798616  293873 cri.go:89] found id: ""
	I1101 09:31:33.798678  293873 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:33.815414  293873 out.go:203] 
	W1101 09:31:33.818287  293873 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:33.818313  293873 out.go:285] * 
	* 
	W1101 09:31:34.221551  293873 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:34.224533  293873 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.74s)
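Note: this failure, and every other "addons disable" failure in this report, exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check shells into the node and runs "sudo runc list -f json", which fails on this crio image with "open /run/runc: no such file or directory". A minimal manual reproduction against the profile from this run, sketched under the assumption that the addons-720971 node container is still up and that cri-o on this image keeps its runtime state somewhere other than /run/runc:

# The container-listing step of the check succeeds:
$ out/minikube-linux-arm64 -p addons-720971 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
# The follow-up runc query is what fails (no /run/runc state directory exists):
$ out/minikube-linux-arm64 -p addons-720971 ssh "sudo runc list -f json"
# expected: level=error msg="open /run/runc: no such file or directory"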

                                                
                                    
TestAddons/parallel/Registry (15.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.406127ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003672681s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003951789s
addons_test.go:392: (dbg) Run:  kubectl --context addons-720971 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-720971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-720971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.955175818s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 ip
2025/11/01 09:31:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable registry --alsologtostderr -v=1: exit status 11 (270.684004ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:59.777117  294814 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:59.778261  294814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:59.778328  294814 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:59.778351  294814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:59.779321  294814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:31:59.779779  294814 mustload.go:66] Loading cluster: addons-720971
	I1101 09:31:59.780516  294814 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:59.780540  294814 addons.go:607] checking whether the cluster is paused
	I1101 09:31:59.780695  294814 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:59.780717  294814 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:31:59.781411  294814 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:31:59.800484  294814 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:59.800545  294814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:31:59.818872  294814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:31:59.924823  294814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:59.924908  294814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:59.955306  294814 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:31:59.955337  294814 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:31:59.955343  294814 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:31:59.955348  294814 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:31:59.955351  294814 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:31:59.955355  294814 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:31:59.955358  294814 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:31:59.955362  294814 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:31:59.955365  294814 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:31:59.955371  294814 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:31:59.955375  294814 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:31:59.955379  294814 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:31:59.955388  294814 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:31:59.955391  294814 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:31:59.955395  294814 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:31:59.955409  294814 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:31:59.955418  294814 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:31:59.955423  294814 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:31:59.955429  294814 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:31:59.955432  294814 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:31:59.955437  294814 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:31:59.955441  294814 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:31:59.955444  294814 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:31:59.955447  294814 cri.go:89] found id: ""
	I1101 09:31:59.955502  294814 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:59.971002  294814 out.go:203] 
	W1101 09:31:59.973987  294814 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:59.974017  294814 out.go:285] * 
	* 
	W1101 09:31:59.980510  294814 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:59.983589  294814 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.50s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.5s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.268041ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-720971
addons_test.go:332: (dbg) Run:  kubectl --context addons-720971 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (269.084931ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:34.159358  295915 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:34.160154  295915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:34.160171  295915 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:34.160177  295915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:34.160553  295915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:34.160894  295915 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:34.161284  295915 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:34.161303  295915 addons.go:607] checking whether the cluster is paused
	I1101 09:32:34.161411  295915 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:34.161426  295915 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:34.162042  295915 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:34.186265  295915 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:34.186329  295915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:34.206501  295915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:34.312030  295915 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:34.312115  295915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:34.344662  295915 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:34.344685  295915 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:34.344691  295915 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:34.344695  295915 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:34.344699  295915 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:34.344703  295915 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:34.344707  295915 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:34.344710  295915 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:34.344713  295915 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:34.344719  295915 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:34.344722  295915 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:34.344726  295915 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:34.344729  295915 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:34.344732  295915 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:34.344740  295915 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:34.344745  295915 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:34.344751  295915 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:34.344755  295915 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:34.344758  295915 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:34.344761  295915 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:34.344765  295915 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:34.344769  295915 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:34.344772  295915 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:34.344775  295915 cri.go:89] found id: ""
	I1101 09:32:34.344826  295915 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:34.360200  295915 out.go:203] 
	W1101 09:32:34.363162  295915 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:34.363182  295915 out.go:285] * 
	* 
	W1101 09:32:34.369606  295915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:34.372474  295915 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

                                                
                                    
TestAddons/parallel/Ingress (146.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-720971 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-720971 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-720971 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2fd9e700-5b3c-47c8-a359-56fb861441db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2fd9e700-5b3c-47c8-a359-56fb861441db] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005354928s
I1101 09:32:24.281408  287135 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.167192869s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
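The remote curl above exited with status 28, which is most likely curl's "operation timed out" code surfaced through minikube ssh (minikube itself then reports exit status 1). A quick manual re-check of the ingress path for this profile, sketched with an arbitrary --max-time bound and assuming the nginx ingress and pod from testdata/nginx-ingress-v1.yaml are still deployed:

$ out/minikube-linux-arm64 -p addons-720971 ssh "curl -sS --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
# If this also hangs, inspect the ingress-nginx controller directly:
$ kubectl --context addons-720971 -n ingress-nginx get pods,svc -o wide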
addons_test.go:288: (dbg) Run:  kubectl --context addons-720971 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-720971
helpers_test.go:243: (dbg) docker inspect addons-720971:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897",
	        "Created": "2025-11-01T09:29:10.230050376Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288288,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:29:10.289473763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/hostname",
	        "HostsPath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/hosts",
	        "LogPath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897-json.log",
	        "Name": "/addons-720971",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-720971:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-720971",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897",
	                "LowerDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-720971",
	                "Source": "/var/lib/docker/volumes/addons-720971/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-720971",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-720971",
	                "name.minikube.sigs.k8s.io": "addons-720971",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8851979d0d22902f3cc4de6b037d1dfce977e54cb644d4edd54282862ae106ba",
	            "SandboxKey": "/var/run/docker/netns/8851979d0d22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-720971": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:01:6c:24:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5119f53304fa3253f3af8591ad05d5f56f09adc085fd05368b53e67c3ff3a7b",
	                    "EndpointID": "91180c4a56e50651e273def5c46a2c4ce882c462dfa5479f46dd306f3b137b94",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-720971",
	                        "490d904a357f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-720971 -n addons-720971
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-720971 logs -n 25: (1.510689927s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-812096                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-812096 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ --download-only -p binary-mirror-960233 --alsologtostderr --binary-mirror http://127.0.0.1:36239 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-960233   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p binary-mirror-960233                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-960233   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ addons  │ enable dashboard -p addons-720971                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-720971                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ start   │ -p addons-720971 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-720971 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-720971 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-720971 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-720971 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ ip      │ addons-720971 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-720971 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-720971 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-720971 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ ssh     │ addons-720971 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-720971 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-720971 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-720971                                                                                                                                                                                                                                                                                                                                                                                           │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │ 01 Nov 25 09:32 UTC │
	│ addons  │ addons-720971 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-720971 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-720971 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ ssh     │ addons-720971 ssh cat /opt/local-path-provisioner/pvc-13036f40-77fc-479b-8d89-adac40366789_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │ 01 Nov 25 09:32 UTC │
	│ addons  │ addons-720971 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-720971 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ ip      │ addons-720971 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
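
	The table above lists every minikube invocation the addons tests issued against this profile. Any of them can be replayed by hand with the same test binary; a minimal sketch (binary path, profile name and addon name are all taken from the table above):

	# Re-run one addon toggle against the existing profile, then list what is still enabled.
	out/minikube-linux-arm64 -p addons-720971 addons disable metrics-server --alsologtostderr -v=1
	out/minikube-linux-arm64 -p addons-720971 addons list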
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:43.703595  287891 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:43.704151  287891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:43.704194  287891 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:43.704218  287891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:43.704543  287891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:28:43.705076  287891 out.go:368] Setting JSON to false
	I1101 09:28:43.705990  287891 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4273,"bootTime":1761985051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:28:43.706095  287891 start.go:143] virtualization:  
	I1101 09:28:43.709314  287891 out.go:179] * [addons-720971] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:28:43.713167  287891 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:28:43.713253  287891 notify.go:221] Checking for updates...
	I1101 09:28:43.719125  287891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:43.721896  287891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:28:43.724669  287891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:28:43.727695  287891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:28:43.730618  287891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:28:43.733810  287891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:43.755264  287891 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:28:43.755403  287891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:43.818052  287891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:28:43.809315097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:43.818164  287891 docker.go:319] overlay module found
	I1101 09:28:43.821322  287891 out.go:179] * Using the docker driver based on user configuration
	I1101 09:28:43.824181  287891 start.go:309] selected driver: docker
	I1101 09:28:43.824201  287891 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:43.824215  287891 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:28:43.824902  287891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:43.886668  287891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:28:43.876878567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:43.886825  287891 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:43.887054  287891 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:28:43.889927  287891 out.go:179] * Using Docker driver with root privileges
	I1101 09:28:43.892752  287891 cni.go:84] Creating CNI manager for ""
	I1101 09:28:43.892817  287891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:43.892831  287891 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:43.892914  287891 start.go:353] cluster config:
	{Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:28:43.896080  287891 out.go:179] * Starting "addons-720971" primary control-plane node in "addons-720971" cluster
	I1101 09:28:43.898959  287891 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:43.901942  287891 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:43.904803  287891 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:28:43.904862  287891 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:28:43.904874  287891 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:43.904884  287891 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:43.904971  287891 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:28:43.904981  287891 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
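	The preload referenced above is just an lz4-compressed tarball of container images and metadata, so its contents can be listed on the host without touching the cluster. A sketch, assuming lz4 and GNU tar are installed and MINIKUBE_HOME points at the .minikube directory shown in the log:
	# Read-only peek at the first entries of the cri-o preload tarball.
	PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	lz4 -dc "$PRELOAD" | tar -tf - | head -n 20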
	I1101 09:28:43.905318  287891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/config.json ...
	I1101 09:28:43.905337  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/config.json: {Name:mk964ea0c7b731f415496ba07e2cc0c6bc626b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:43.919803  287891 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:43.919932  287891 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:43.919958  287891 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:28:43.919962  287891 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:28:43.919971  287891 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:28:43.919976  287891 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:29:01.846282  287891 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:29:01.846318  287891 cache.go:233] Successfully downloaded all kic artifacts
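	Whether the kic base image actually landed in the local Docker daemon can be confirmed from the host with the docker CLI alone; a sketch:
	# The digest should match the sha256 pinned in the cache lines above.
	docker images --digests gcr.io/k8s-minikube/kicbase-builds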
	I1101 09:29:01.846347  287891 start.go:360] acquireMachinesLock for addons-720971: {Name:mkda075e3a51e16fadb53ae3d5bd1928997b2eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:29:01.847142  287891 start.go:364] duration metric: took 772.1µs to acquireMachinesLock for "addons-720971"
	I1101 09:29:01.847177  287891 start.go:93] Provisioning new machine with config: &{Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:01.847270  287891 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:29:01.850641  287891 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:29:01.850867  287891 start.go:159] libmachine.API.Create for "addons-720971" (driver="docker")
	I1101 09:29:01.850901  287891 client.go:173] LocalClient.Create starting
	I1101 09:29:01.851017  287891 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 09:29:02.328580  287891 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 09:29:03.485013  287891 cli_runner.go:164] Run: docker network inspect addons-720971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:29:03.501195  287891 cli_runner.go:211] docker network inspect addons-720971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:29:03.501293  287891 network_create.go:284] running [docker network inspect addons-720971] to gather additional debugging logs...
	I1101 09:29:03.501319  287891 cli_runner.go:164] Run: docker network inspect addons-720971
	W1101 09:29:03.518809  287891 cli_runner.go:211] docker network inspect addons-720971 returned with exit code 1
	I1101 09:29:03.518840  287891 network_create.go:287] error running [docker network inspect addons-720971]: docker network inspect addons-720971: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-720971 not found
	I1101 09:29:03.518855  287891 network_create.go:289] output of [docker network inspect addons-720971]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-720971 not found
	
	** /stderr **
	I1101 09:29:03.518950  287891 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:03.534841  287891 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d86470}
	I1101 09:29:03.534882  287891 network_create.go:124] attempt to create docker network addons-720971 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:29:03.534947  287891 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-720971 addons-720971
	I1101 09:29:03.592536  287891 network_create.go:108] docker network addons-720971 192.168.49.0/24 created
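	The bridge network created above can be checked from the host using the same inspect template minikube itself runs; a sketch:
	# Confirm the subnet and gateway assigned to the cluster network.
	docker network inspect addons-720971 \
	  --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'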
	I1101 09:29:03.592572  287891 kic.go:121] calculated static IP "192.168.49.2" for the "addons-720971" container
	I1101 09:29:03.592644  287891 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:29:03.606786  287891 cli_runner.go:164] Run: docker volume create addons-720971 --label name.minikube.sigs.k8s.io=addons-720971 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:29:03.624649  287891 oci.go:103] Successfully created a docker volume addons-720971
	I1101 09:29:03.624740  287891 cli_runner.go:164] Run: docker run --rm --name addons-720971-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-720971 --entrypoint /usr/bin/test -v addons-720971:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:29:05.750113  287891 cli_runner.go:217] Completed: docker run --rm --name addons-720971-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-720971 --entrypoint /usr/bin/test -v addons-720971:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.125329896s)
	I1101 09:29:05.750145  287891 oci.go:107] Successfully prepared a docker volume addons-720971
	I1101 09:29:05.750178  287891 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:05.750205  287891 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:29:05.750271  287891 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-720971:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:29:10.149410  287891 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-720971:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.399097549s)
	I1101 09:29:10.149463  287891 kic.go:203] duration metric: took 4.399248675s to extract preloaded images to volume ...
	W1101 09:29:10.149600  287891 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:29:10.149735  287891 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:29:10.214591  287891 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-720971 --name addons-720971 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-720971 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-720971 --network addons-720971 --ip 192.168.49.2 --volume addons-720971:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:29:10.509061  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Running}}
	I1101 09:29:10.528922  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:10.550590  287891 cli_runner.go:164] Run: docker exec addons-720971 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:29:10.602050  287891 oci.go:144] the created container "addons-720971" has a running status.
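	The node is an ordinary Docker container, so once the provisioning steps below have generated the profile's SSH key, the host port mapped to 22/tcp can be recovered with the same template the log uses and a shell opened directly. A sketch, with MINIKUBE_HOME standing in for the .minikube directory shown in the log:
	# Find the mapped SSH port and run a command on the node as the docker user.
	SSH_PORT=$(docker container inspect addons-720971 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')
	ssh -o StrictHostKeyChecking=no -i "$MINIKUBE_HOME/machines/addons-720971/id_rsa" \
	  -p "$SSH_PORT" docker@127.0.0.1 uname -a
	# (out/minikube-linux-arm64 -p addons-720971 ssh wraps the same lookup.)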
	I1101 09:29:10.602079  287891 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa...
	I1101 09:29:11.449205  287891 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:29:11.482302  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:11.498401  287891 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:29:11.498424  287891 kic_runner.go:114] Args: [docker exec --privileged addons-720971 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:29:11.541807  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:11.560503  287891 machine.go:94] provisionDockerMachine start ...
	I1101 09:29:11.560606  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:11.577015  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:11.577334  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:11.577344  287891 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:29:11.577928  287891 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53400->127.0.0.1:33139: read: connection reset by peer
	I1101 09:29:14.725273  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-720971
	
	I1101 09:29:14.725298  287891 ubuntu.go:182] provisioning hostname "addons-720971"
	I1101 09:29:14.725364  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:14.742620  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:14.742952  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:14.742969  287891 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-720971 && echo "addons-720971" | sudo tee /etc/hostname
	I1101 09:29:14.898796  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-720971
	
	I1101 09:29:14.898881  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:14.918020  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:14.918322  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:14.918343  287891 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-720971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-720971/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-720971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:29:15.070212  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:29:15.070238  287891 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:29:15.070264  287891 ubuntu.go:190] setting up certificates
	I1101 09:29:15.070275  287891 provision.go:84] configureAuth start
	I1101 09:29:15.070338  287891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-720971
	I1101 09:29:15.088811  287891 provision.go:143] copyHostCerts
	I1101 09:29:15.088904  287891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:29:15.089040  287891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:29:15.089107  287891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:29:15.089165  287891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.addons-720971 san=[127.0.0.1 192.168.49.2 addons-720971 localhost minikube]
	I1101 09:29:15.505475  287891 provision.go:177] copyRemoteCerts
	I1101 09:29:15.505545  287891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:29:15.505589  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:15.523685  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:15.629731  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:29:15.647433  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:29:15.665051  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:29:15.682514  287891 provision.go:87] duration metric: took 612.224341ms to configureAuth
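	The server certificate generated above should carry exactly the SANs listed at provision time (127.0.0.1, 192.168.49.2, addons-720971, localhost, minikube); a quick host-side check with openssl, again assuming MINIKUBE_HOME is the .minikube directory from the log:
	# Print the subject alternative names of the machine server certificate.
	openssl x509 -in "$MINIKUBE_HOME/machines/server.pem" -noout -text \
	  | grep -A1 'Subject Alternative Name'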
	I1101 09:29:15.682542  287891 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:29:15.682766  287891 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:15.682878  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:15.699734  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:15.700040  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:15.700061  287891 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:29:15.953308  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:29:15.953333  287891 machine.go:97] duration metric: took 4.392806447s to provisionDockerMachine
	I1101 09:29:15.953343  287891 client.go:176] duration metric: took 14.102432735s to LocalClient.Create
	I1101 09:29:15.953356  287891 start.go:167] duration metric: took 14.102490583s to libmachine.API.Create "addons-720971"
	I1101 09:29:15.953363  287891 start.go:293] postStartSetup for "addons-720971" (driver="docker")
	I1101 09:29:15.953374  287891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:29:15.953440  287891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:29:15.953490  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:15.970620  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.078247  287891 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:29:16.081798  287891 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:29:16.081829  287891 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:29:16.081844  287891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:29:16.081930  287891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:29:16.081959  287891 start.go:296] duration metric: took 128.589353ms for postStartSetup
	I1101 09:29:16.082285  287891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-720971
	I1101 09:29:16.099481  287891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/config.json ...
	I1101 09:29:16.099770  287891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:29:16.099824  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:16.116654  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.222725  287891 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:29:16.227326  287891 start.go:128] duration metric: took 14.380039259s to createHost
	I1101 09:29:16.227354  287891 start.go:83] releasing machines lock for "addons-720971", held for 14.380196359s
	I1101 09:29:16.227427  287891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-720971
	I1101 09:29:16.244362  287891 ssh_runner.go:195] Run: cat /version.json
	I1101 09:29:16.244431  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:16.244685  287891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:29:16.244747  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:16.265957  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.268831  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.369519  287891 ssh_runner.go:195] Run: systemctl --version
	I1101 09:29:16.460628  287891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:29:16.497411  287891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:29:16.502100  287891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:29:16.502190  287891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:29:16.531382  287891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:29:16.531458  287891 start.go:496] detecting cgroup driver to use...
	I1101 09:29:16.531506  287891 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:29:16.531570  287891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:29:16.548159  287891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:29:16.561235  287891 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:29:16.561319  287891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:29:16.578966  287891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:29:16.597250  287891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:29:16.708353  287891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:29:16.831964  287891 docker.go:234] disabling docker service ...
	I1101 09:29:16.832084  287891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:29:16.853099  287891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:29:16.866038  287891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:29:16.974542  287891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:29:17.098053  287891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:29:17.110998  287891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:29:17.125115  287891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:29:17.125181  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.133903  287891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:29:17.133968  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.142822  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.151622  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.160659  287891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:29:17.168771  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.177139  287891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.189946  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.198280  287891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:29:17.205835  287891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:29:17.213035  287891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:17.323636  287891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:29:17.446733  287891 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:29:17.446891  287891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:29:17.450960  287891 start.go:564] Will wait 60s for crictl version
	I1101 09:29:17.451070  287891 ssh_runner.go:195] Run: which crictl
	I1101 09:29:17.454461  287891 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:29:17.479480  287891 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
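	The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf inside the node: they pin the pause image, force the cgroupfs cgroup manager, and open unprivileged low ports before cri-o is restarted. Condensed into one sketch (run inside the node, e.g. via minikube ssh; paths as in the log):
	# Mirror the cri-o configuration steps from the log, then restart and verify.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo systemctl restart crio
	sudo crictl version   # expect RuntimeName cri-o, RuntimeApiVersion v1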
	I1101 09:29:17.479633  287891 ssh_runner.go:195] Run: crio --version
	I1101 09:29:17.512441  287891 ssh_runner.go:195] Run: crio --version
	I1101 09:29:17.542824  287891 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:29:17.545635  287891 cli_runner.go:164] Run: docker network inspect addons-720971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:17.561748  287891 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:29:17.565677  287891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:17.575955  287891 kubeadm.go:884] updating cluster {Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:29:17.576078  287891 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:17.576137  287891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:17.612405  287891 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:17.612429  287891 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:29:17.612483  287891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:17.636855  287891 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:17.636879  287891 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:29:17.636887  287891 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:29:17.636970  287891 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-720971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:29:17.637052  287891 ssh_runner.go:195] Run: crio config
	I1101 09:29:17.708167  287891 cni.go:84] Creating CNI manager for ""
	I1101 09:29:17.708189  287891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:17.708208  287891 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:29:17.708233  287891 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-720971 NodeName:addons-720971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:29:17.708369  287891 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-720971"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
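	The configuration dumped above is written to /var/tmp/minikube/kubeadm.yaml.new on the node a few lines further down; it can be sanity-checked without creating anything by letting kubeadm dry-run it. A sketch, run inside the node with the binaries path from the log:
	# Validate the generated kubeadm config without applying any changes.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run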
	
	I1101 09:29:17.708442  287891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:29:17.715897  287891 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:29:17.715966  287891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:29:17.723547  287891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:29:17.736286  287891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:29:17.749364  287891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1101 09:29:17.761979  287891 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:29:17.765311  287891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:17.774401  287891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:17.890507  287891 ssh_runner.go:195] Run: sudo systemctl start kubelet
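	With the drop-in and unit file in place, kubelet is started using the ExecStart override shown earlier (bootstrap kubeconfig, cgroups-per-qos disabled, node IP 192.168.49.2). The effective unit and its recent output can be checked inside the node; a sketch:
	# Show the merged kubelet unit (including 10-kubeadm.conf) and its status.
	sudo systemctl cat kubelet
	systemctl is-active kubelet
	sudo journalctl -u kubelet -n 20 --no-pager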
	I1101 09:29:17.907207  287891 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971 for IP: 192.168.49.2
	I1101 09:29:17.907278  287891 certs.go:195] generating shared ca certs ...
	I1101 09:29:17.907309  287891 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:17.907470  287891 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:29:18.489440  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt ...
	I1101 09:29:18.489475  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt: {Name:mk898dc43af82dfa9231d0fc36cb33f84849bbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.489682  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key ...
	I1101 09:29:18.489713  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key: {Name:mka703e411a1c87bad1de809149144253920e01f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.489813  287891 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:29:18.894654  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt ...
	I1101 09:29:18.894685  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt: {Name:mkc9777fdbfd77c8972d0c36c45bdb2e6f0cac10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.895558  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key ...
	I1101 09:29:18.895578  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key: {Name:mk68326f0acf23235e1be5f28012de152996722a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.895667  287891 certs.go:257] generating profile certs ...
	I1101 09:29:18.895728  287891 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.key
	I1101 09:29:18.895745  287891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt with IP's: []
	I1101 09:29:19.255672  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt ...
	I1101 09:29:19.255705  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: {Name:mk594dc5e6a47adfd22abde413a6bc58a616786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.256512  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.key ...
	I1101 09:29:19.256529  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.key: {Name:mk4a7c8ac0a94db5b133b743f0a9e3cc97090ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.257270  287891 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a
	I1101 09:29:19.257301  287891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:29:19.551187  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a ...
	I1101 09:29:19.551220  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a: {Name:mk67928a98e2c7e5fa55dadde3e91a337b63d08f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.551408  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a ...
	I1101 09:29:19.551422  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a: {Name:mk12924766dcee21229beb49a3ba49a59e57dc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.552111  287891 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt
	I1101 09:29:19.552200  287891 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key
	I1101 09:29:19.552258  287891 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key
	I1101 09:29:19.552281  287891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt with IP's: []
	I1101 09:29:20.038963  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt ...
	I1101 09:29:20.038996  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt: {Name:mk26ce0a562ab7b4a5540e2d463ef07ef7e2ee37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:20.039854  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key ...
	I1101 09:29:20.039875  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key: {Name:mk0a7ad61a66a7bb7bbefb5ac9cdaac9e341c325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:20.040723  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:29:20.040770  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:29:20.040801  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:29:20.040828  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:29:20.041392  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:29:20.061507  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:29:20.081613  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:29:20.100971  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:29:20.119229  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:29:20.137847  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:29:20.157728  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:29:20.178134  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:29:20.196349  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:29:20.215958  287891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:29:20.228918  287891 ssh_runner.go:195] Run: openssl version
	I1101 09:29:20.235491  287891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:29:20.244038  287891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:20.247878  287891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:20.247943  287891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:20.288815  287891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:29:20.296929  287891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:29:20.300295  287891 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:29:20.300345  287891 kubeadm.go:401] StartCluster: {Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:29:20.300426  287891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:29:20.300491  287891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:29:20.328443  287891 cri.go:89] found id: ""
	I1101 09:29:20.328525  287891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:29:20.336165  287891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:29:20.343830  287891 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:29:20.343950  287891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:29:20.351742  287891 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:29:20.351767  287891 kubeadm.go:158] found existing configuration files:
	
	I1101 09:29:20.351817  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:29:20.359528  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:29:20.359642  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:29:20.366734  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:29:20.373979  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:29:20.374042  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:29:20.381193  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:29:20.388590  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:29:20.388664  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:29:20.395756  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:29:20.403183  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:29:20.403297  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:29:20.410375  287891 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:29:20.448409  287891 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:29:20.448511  287891 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:29:20.478838  287891 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:29:20.478917  287891 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:29:20.478958  287891 kubeadm.go:319] OS: Linux
	I1101 09:29:20.479010  287891 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:29:20.479067  287891 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:29:20.479120  287891 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:29:20.479174  287891 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:29:20.479228  287891 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:29:20.479281  287891 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:29:20.479332  287891 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:29:20.479393  287891 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:29:20.479445  287891 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:29:20.542861  287891 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:29:20.543060  287891 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:29:20.543200  287891 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:29:20.550410  287891 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:29:20.554687  287891 out.go:252]   - Generating certificates and keys ...
	I1101 09:29:20.554880  287891 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:29:20.555019  287891 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:29:21.068063  287891 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:29:21.414777  287891 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:29:21.838592  287891 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:29:22.356176  287891 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:29:22.721627  287891 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:29:22.722044  287891 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-720971 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:23.072337  287891 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:29:23.072676  287891 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-720971 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:23.258085  287891 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:29:23.674734  287891 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:29:23.910846  287891 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:29:23.911389  287891 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:29:24.469718  287891 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:29:24.610830  287891 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:29:25.256448  287891 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:29:26.202512  287891 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:29:26.607998  287891 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:29:26.608616  287891 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:29:26.613560  287891 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:29:26.616881  287891 out.go:252]   - Booting up control plane ...
	I1101 09:29:26.616996  287891 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:29:26.617087  287891 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:29:26.617935  287891 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:29:26.633077  287891 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:29:26.633624  287891 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:29:26.641190  287891 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:29:26.641509  287891 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:29:26.641558  287891 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:29:26.766136  287891 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:29:26.766260  287891 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:29:28.263928  287891 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501631414s
	I1101 09:29:28.268262  287891 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:29:28.268362  287891 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:29:28.268619  287891 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:29:28.268708  287891 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:29:32.038647  287891 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.769187537s
	I1101 09:29:34.106338  287891 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.837372272s
	I1101 09:29:34.770680  287891 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501483318s
	I1101 09:29:34.789900  287891 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:29:34.807446  287891 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:29:34.829183  287891 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:29:34.829439  287891 kubeadm.go:319] [mark-control-plane] Marking the node addons-720971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:29:34.851487  287891 kubeadm.go:319] [bootstrap-token] Using token: s773yf.tbd4dhvjfsergipt
	I1101 09:29:34.854762  287891 out.go:252]   - Configuring RBAC rules ...
	I1101 09:29:34.854894  287891 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:29:34.861136  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:29:34.869685  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:29:34.877093  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:29:34.881394  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:29:34.887263  287891 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:29:35.178240  287891 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:29:35.608069  287891 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:29:36.180510  287891 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:29:36.181463  287891 kubeadm.go:319] 
	I1101 09:29:36.181536  287891 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:29:36.181542  287891 kubeadm.go:319] 
	I1101 09:29:36.181623  287891 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:29:36.181627  287891 kubeadm.go:319] 
	I1101 09:29:36.181653  287891 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:29:36.181726  287891 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:29:36.181782  287891 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:29:36.181786  287891 kubeadm.go:319] 
	I1101 09:29:36.181842  287891 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:29:36.181847  287891 kubeadm.go:319] 
	I1101 09:29:36.181897  287891 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:29:36.181901  287891 kubeadm.go:319] 
	I1101 09:29:36.181956  287891 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:29:36.182033  287891 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:29:36.182111  287891 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:29:36.182117  287891 kubeadm.go:319] 
	I1101 09:29:36.182204  287891 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:29:36.182284  287891 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:29:36.182289  287891 kubeadm.go:319] 
	I1101 09:29:36.182376  287891 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s773yf.tbd4dhvjfsergipt \
	I1101 09:29:36.182483  287891 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 09:29:36.182507  287891 kubeadm.go:319] 	--control-plane 
	I1101 09:29:36.182511  287891 kubeadm.go:319] 
	I1101 09:29:36.182611  287891 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:29:36.182617  287891 kubeadm.go:319] 
	I1101 09:29:36.182702  287891 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s773yf.tbd4dhvjfsergipt \
	I1101 09:29:36.182808  287891 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 09:29:36.186434  287891 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:29:36.186674  287891 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:29:36.186787  287891 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:29:36.186806  287891 cni.go:84] Creating CNI manager for ""
	I1101 09:29:36.186814  287891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:36.191906  287891 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:29:36.194800  287891 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:29:36.198724  287891 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:29:36.198745  287891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:29:36.212443  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:29:36.511063  287891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:29:36.511218  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:36.511285  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-720971 minikube.k8s.io/updated_at=2025_11_01T09_29_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-720971 minikube.k8s.io/primary=true
	I1101 09:29:36.656824  287891 ops.go:34] apiserver oom_adj: -16
	I1101 09:29:36.656926  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:37.157224  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:37.658033  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:38.157873  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:38.657039  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:39.157430  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:39.657135  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:40.157102  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:40.657512  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:41.157066  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:41.280174  287891 kubeadm.go:1114] duration metric: took 4.769019595s to wait for elevateKubeSystemPrivileges
	I1101 09:29:41.280208  287891 kubeadm.go:403] duration metric: took 20.97986616s to StartCluster
	I1101 09:29:41.280227  287891 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:41.280958  287891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:29:41.281426  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:41.281625  287891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:29:41.281656  287891 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:41.281912  287891 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:41.281943  287891 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:29:41.282020  287891 addons.go:70] Setting yakd=true in profile "addons-720971"
	I1101 09:29:41.282035  287891 addons.go:239] Setting addon yakd=true in "addons-720971"
	I1101 09:29:41.282057  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.282117  287891 addons.go:70] Setting inspektor-gadget=true in profile "addons-720971"
	I1101 09:29:41.282140  287891 addons.go:239] Setting addon inspektor-gadget=true in "addons-720971"
	I1101 09:29:41.282162  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.282513  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.282572  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.282947  287891 addons.go:70] Setting metrics-server=true in profile "addons-720971"
	I1101 09:29:41.282970  287891 addons.go:239] Setting addon metrics-server=true in "addons-720971"
	I1101 09:29:41.282993  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.283440  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.286113  287891 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-720971"
	I1101 09:29:41.286366  287891 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-720971"
	I1101 09:29:41.286436  287891 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-720971"
	I1101 09:29:41.286450  287891 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-720971"
	I1101 09:29:41.286476  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.286926  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.287182  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.288265  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.286269  287891 addons.go:70] Setting cloud-spanner=true in profile "addons-720971"
	I1101 09:29:41.286278  287891 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-720971"
	I1101 09:29:41.286282  287891 addons.go:70] Setting default-storageclass=true in profile "addons-720971"
	I1101 09:29:41.286286  287891 addons.go:70] Setting gcp-auth=true in profile "addons-720971"
	I1101 09:29:41.286289  287891 addons.go:70] Setting ingress=true in profile "addons-720971"
	I1101 09:29:41.286293  287891 addons.go:70] Setting ingress-dns=true in profile "addons-720971"
	I1101 09:29:41.292052  287891 out.go:179] * Verifying Kubernetes components...
	I1101 09:29:41.297159  287891 addons.go:70] Setting registry=true in profile "addons-720971"
	I1101 09:29:41.297279  287891 addons.go:239] Setting addon registry=true in "addons-720971"
	I1101 09:29:41.297320  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.297190  287891 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-720971"
	I1101 09:29:41.297930  287891 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-720971"
	I1101 09:29:41.298288  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.298560  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.297175  287891 addons.go:70] Setting registry-creds=true in profile "addons-720971"
	I1101 09:29:41.314586  287891 addons.go:239] Setting addon registry-creds=true in "addons-720971"
	I1101 09:29:41.314628  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.315089  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.297184  287891 addons.go:70] Setting storage-provisioner=true in profile "addons-720971"
	I1101 09:29:41.316136  287891 addons.go:239] Setting addon storage-provisioner=true in "addons-720971"
	I1101 09:29:41.316166  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.316596  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.314481  287891 addons.go:70] Setting volcano=true in profile "addons-720971"
	I1101 09:29:41.321766  287891 addons.go:239] Setting addon volcano=true in "addons-720971"
	I1101 09:29:41.321808  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.322267  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.328122  287891 addons.go:239] Setting addon cloud-spanner=true in "addons-720971"
	I1101 09:29:41.328174  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.328647  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.314498  287891 addons.go:70] Setting volumesnapshots=true in profile "addons-720971"
	I1101 09:29:41.341876  287891 addons.go:239] Setting addon volumesnapshots=true in "addons-720971"
	I1101 09:29:41.341916  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.342390  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.350695  287891 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-720971"
	I1101 09:29:41.350877  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.351780  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.370787  287891 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-720971"
	I1101 09:29:41.371332  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.393974  287891 mustload.go:66] Loading cluster: addons-720971
	I1101 09:29:41.394221  287891 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:41.394532  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.415363  287891 addons.go:239] Setting addon ingress=true in "addons-720971"
	I1101 09:29:41.415453  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.416024  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.443831  287891 addons.go:239] Setting addon ingress-dns=true in "addons-720971"
	I1101 09:29:41.443897  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.444453  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.494249  287891 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:29:41.520109  287891 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:29:41.524076  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:29:41.524148  287891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:29:41.524267  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.534215  287891 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:29:41.537514  287891 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:41.537536  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:29:41.537607  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.556990  287891 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:29:41.559173  287891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:41.559273  287891 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:29:41.563441  287891 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:29:41.564493  287891 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:41.564513  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:29:41.564579  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.564763  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:29:41.564773  287891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:29:41.564809  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.577095  287891 addons.go:239] Setting addon default-storageclass=true in "addons-720971"
	I1101 09:29:41.577136  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.577748  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.580298  287891 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:41.580318  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:29:41.580381  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.585923  287891 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:29:41.586016  287891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:29:41.586390  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.591514  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:29:41.591785  287891 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:29:41.596012  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:29:41.596179  287891 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:29:41.599300  287891 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-720971"
	I1101 09:29:41.599343  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.599759  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.615692  287891 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:41.615717  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:29:41.615774  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.632297  287891 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:29:41.634221  287891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:29:41.634665  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.635520  287891 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:41.635538  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:29:41.635638  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.636431  287891 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:29:41.639386  287891 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:41.639421  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:29:41.639485  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.648066  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:29:41.648089  287891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:29:41.648169  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.659826  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:41.663599  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:41.666623  287891 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:41.666645  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:29:41.666709  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.689164  287891 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:29:41.695431  287891 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:29:41.695459  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:29:41.695527  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	W1101 09:29:41.713722  287891 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:29:41.714936  287891 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:41.714953  287891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:29:41.715020  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.733386  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:29:41.736511  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:29:41.741325  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:29:41.744140  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:29:41.750339  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:29:41.753996  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.754921  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.759503  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:29:41.762371  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:29:41.804909  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:29:41.810026  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.811953  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:29:41.812027  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:29:41.812128  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.821957  287891 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:29:41.826101  287891 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:29:41.829942  287891 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:41.830020  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:29:41.830129  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.852364  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.852819  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.853820  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.881481  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.891710  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.895862  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.929858  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.930022  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.939853  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	W1101 09:29:41.952405  287891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:29:41.952441  287891 retry.go:31] will retry after 170.670867ms: ssh: handshake failed: EOF
	I1101 09:29:41.955440  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.966579  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.969995  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	W1101 09:29:41.972748  287891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:29:41.972770  287891 retry.go:31] will retry after 180.891157ms: ssh: handshake failed: EOF
	I1101 09:29:42.074669  287891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 09:29:42.155038  287891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:29:42.155133  287891 retry.go:31] will retry after 494.620939ms: ssh: handshake failed: EOF
	I1101 09:29:42.422795  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:42.500023  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:42.506637  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:42.538630  287891 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:29:42.538655  287891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:29:42.591941  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:29:42.591964  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:29:42.606057  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:29:42.606082  287891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:29:42.623995  287891 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:29:42.624021  287891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:29:42.638371  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:42.707525  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:42.710221  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:42.711502  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:42.713495  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:42.725913  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:29:42.725938  287891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:29:42.733754  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:29:42.733780  287891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:29:42.763328  287891 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:42.763352  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:29:42.765618  287891 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:29:42.765639  287891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:29:42.770367  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:42.779034  287891 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:29:42.779058  287891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:29:42.884552  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:42.884578  287891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:29:42.887096  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:29:42.887121  287891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:29:42.901324  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:29:42.901349  287891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:29:42.931899  287891 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:42.931921  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:29:42.933401  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:43.075359  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:43.075382  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:29:43.091874  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:43.108598  287891 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:43.108623  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:29:43.151382  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:43.185164  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:43.211653  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:43.239090  287891 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.164386951s)
	I1101 09:29:43.239846  287891 node_ready.go:35] waiting up to 6m0s for node "addons-720971" to be "Ready" ...
	I1101 09:29:43.240072  287891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.605817405s)
	I1101 09:29:43.240093  287891 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
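(The sed pipeline completed just above rewrites the coredns ConfigMap so the in-cluster resolver answers for host.minikube.internal. Based on the expression in that command, the fragment injected into the Corefile is the hosts block below; the rest of the Corefile is left as shipped:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
)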
	I1101 09:29:43.397226  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:29:43.397261  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:29:43.517411  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.09457234s)
	I1101 09:29:43.671860  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:29:43.671930  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:29:43.748097  287891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-720971" context rescaled to 1 replicas
	I1101 09:29:43.778415  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.278352576s)
	I1101 09:29:43.927951  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:29:43.927976  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:29:44.191170  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:29:44.191196  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:29:44.442120  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:29:44.442148  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:29:44.633186  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:29:44.633210  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:29:44.848755  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:29:44.848781  287891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:29:45.077024  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:29:45.077053  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:29:45.203919  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:29:45.203947  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:29:45.215833  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.577424007s)
	I1101 09:29:45.215900  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.508352193s)
	I1101 09:29:45.216170  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.709506378s)
	W1101 09:29:45.243414  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:45.247527  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:45.247558  287891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:29:45.337813  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:45.411700  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.701443155s)
	I1101 09:29:46.281086  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.569549011s)
	W1101 09:29:47.245870  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:47.422514  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.489079921s)
	W1101 09:29:47.422551  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:47.422570  287891 retry.go:31] will retry after 243.431697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
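(The repeated "apiVersion not set, kind not set" failures mean kubectl's client-side validation found a YAML document in ig-crd.yaml without its top-level type metadata; every document in an applied manifest must start with those two fields. The actual ig-crd.yaml content is not shown in this log, so the snippet below is only a generic sketch of the header shape kubectl expects, with a hypothetical resource name:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.kinvolk.io   # hypothetical name, for illustration only
)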
	I1101 09:29:47.422630  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.330731092s)
	I1101 09:29:47.422646  287891 addons.go:480] Verifying addon metrics-server=true in "addons-720971"
	I1101 09:29:47.422677  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.271271252s)
	I1101 09:29:47.422694  287891 addons.go:480] Verifying addon registry=true in "addons-720971"
	I1101 09:29:47.422865  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.652030813s)
	I1101 09:29:47.423183  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.237987719s)
	I1101 09:29:47.423355  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.211671461s)
	W1101 09:29:47.423389  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:29:47.423405  287891 retry.go:31] will retry after 222.993155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
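(Here the csi-hostpath-snapclass VolumeSnapshotClass is rejected because it is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, before the API server has established them; minikube handles this by retrying the apply, as logged below. Outside of that automated retry, one way to avoid the race is to wait for the CRD to be established before applying objects of that kind, for example:

	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
)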
	I1101 09:29:47.423551  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.710033172s)
	I1101 09:29:47.423566  287891 addons.go:480] Verifying addon ingress=true in "addons-720971"
	I1101 09:29:47.426178  287891 out.go:179] * Verifying registry addon...
	I1101 09:29:47.426229  287891 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-720971 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:29:47.428223  287891 out.go:179] * Verifying ingress addon...
	I1101 09:29:47.431693  287891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:29:47.431797  287891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:29:47.440472  287891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:29:47.440502  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.441008  287891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:29:47.441030  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:47.647203  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:47.658419  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.320555674s)
	I1101 09:29:47.658456  287891 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-720971"
	I1101 09:29:47.662094  287891 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:29:47.665680  287891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:29:47.666088  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:47.686838  287891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:29:47.686867  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
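(The kapi.go:96 lines that follow poll the listed label selectors while the matching pods are still Pending. The same check can be reproduced by hand with kubectl against the selectors and namespaces shown in the log, e.g.:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
)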
	I1101 09:29:47.936965  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.937101  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.179294  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.436013  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.436535  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:48.669675  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.723548  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.057427018s)
	W1101 09:29:48.723582  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:48.723602  287891 retry.go:31] will retry after 561.602325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:48.935868  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:48.936124  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.169489  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:49.269070  287891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:29:49.269171  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:49.285777  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:49.287029  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:49.406273  287891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:29:49.422815  287891 addons.go:239] Setting addon gcp-auth=true in "addons-720971"
	I1101 09:29:49.422861  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:49.423317  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:49.437185  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:49.437271  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.444200  287891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:29:49.444257  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:49.462647  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:49.669124  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:49.743165  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:49.936038  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.936112  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:50.113229  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.113313  287891 retry.go:31] will retry after 836.696112ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.117205  287891 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:29:50.120107  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:50.122873  287891 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:29:50.122904  287891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:29:50.137922  287891 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:29:50.137946  287891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:29:50.151723  287891 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:50.151749  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:29:50.165655  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:50.169830  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:50.435848  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.436467  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:50.655819  287891 addons.go:480] Verifying addon gcp-auth=true in "addons-720971"
	I1101 09:29:50.659722  287891 out.go:179] * Verifying gcp-auth addon...
	I1101 09:29:50.664709  287891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:29:50.683021  287891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:29:50.683045  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:50.687414  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:50.935689  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.936169  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:50.951177  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:51.170001  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.170113  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:51.437050  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.437425  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:51.673762  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.674449  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:51.764649  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:51.764687  287891 retry.go:31] will retry after 948.865158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:51.935549  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.935707  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.167894  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.168853  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:52.243545  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:52.435604  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.435693  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.668697  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.669495  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:52.714692  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:52.936738  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.937078  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:53.170520  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:53.172035  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.436769  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.437045  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:53.523497  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:53.523531  287891 retry.go:31] will retry after 945.858273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:53.669134  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.669324  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:53.935357  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.935717  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.167595  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.168379  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:54.435685  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.435794  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.469986  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:54.672515  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.672596  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:54.743666  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:54.936293  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.937036  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.168386  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.168846  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:55.291948  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:55.291987  287891 retry.go:31] will retry after 1.260772996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:55.434811  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:55.435424  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.668805  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:55.669311  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.935499  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:55.935650  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.168446  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.168914  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.435087  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.435231  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.553611  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:56.672621  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.673355  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.935787  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.936213  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.167773  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.168522  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:57.242538  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	W1101 09:29:57.346403  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:57.346472  287891 retry.go:31] will retry after 1.684425992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:57.436101  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.436231  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.669060  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.669419  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:57.935671  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.935811  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.167667  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:58.168887  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.434653  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:58.435043  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.668644  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:58.669377  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.935489  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:58.935990  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.031084  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:59.170307  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.170572  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:59.259409  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:59.436983  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.437349  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.673813  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.673957  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:59.858466  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:59.858496  287891 retry.go:31] will retry after 3.168392768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:59.942013  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.942258  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.191674  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.191830  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.466376  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:00.469259  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.672728  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.673135  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.935791  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:00.936440  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.170147  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.171493  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:01.436241  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:01.436465  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.669384  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.669570  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:01.743514  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:01.935589  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.935656  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.167411  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.168689  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.435625  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:02.436216  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.667966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.669122  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.935118  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.935706  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:03.027938  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:03.169566  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.169726  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:03.436129  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.437296  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:03.670826  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.671006  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:03.743657  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	W1101 09:30:03.873883  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:03.873917  287891 retry.go:31] will retry after 5.89836222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
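Note on the repeated inspektor-gadget apply failures: the stderr above shows kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because at least one document in that file is missing the top-level apiVersion and kind fields, so each retry that follows fails with the identical error. The file's contents are not captured in this report; as a hedged sketch (hypothetical object, not the real CRD), any manifest document without those two fields is refused the same way:

    # Hypothetical reproduction of the validation failure; exact wording can vary by kubectl version.
    cat <<'EOF' | kubectl apply --dry-run=client -f -
    metadata:
      name: example-missing-header    # apiVersion and kind intentionally omitted
    EOF
    # kubectl rejects the document with an "apiVersion not set, kind not set" validation error,
    # unless validation is disabled with --validate=false (as the error text itself suggests).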
	I1101 09:30:03.935669  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.935808  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.168500  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.168567  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.434991  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.435471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.672089  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.672192  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.936105  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.936361  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.169115  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:05.169196  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:05.435283  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.435446  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.668810  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:05.669291  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:05.935797  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.936336  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.169106  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.169247  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:06.243073  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:06.435887  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.436099  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.667613  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:06.668768  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.935694  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.936036  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.168141  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.168954  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.435087  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:07.435312  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.673318  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.673490  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.935906  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.935983  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:08.167603  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.168753  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:08.243767  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:08.435051  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.435379  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:08.668710  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:08.669503  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.934853  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.935228  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.168256  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.169490  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.435966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.436085  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:09.668340  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.669245  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.773384  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:09.943394  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.945151  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:10.170169  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.170324  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:10.437117  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.437515  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:30:10.599841  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:10.599875  287891 retry.go:31] will retry after 10.207833999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:10.668181  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.668716  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:10.743713  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:10.935115  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.935362  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.170330  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.171164  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.435023  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.436347  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.667305  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.668378  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.935278  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.935956  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.169144  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.169768  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.435760  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.436300  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.669366  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.669514  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.935340  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.935774  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.167402  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.168335  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:13.243207  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:13.435491  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:13.435703  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.667511  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.668401  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:13.937080  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.937184  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.167844  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:14.168925  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.435737  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.435909  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:14.668942  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:14.669104  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.935427  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.935827  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.167828  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:15.168871  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.435040  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.435242  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.669029  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.669280  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:15.743278  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:15.935547  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.935978  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.167933  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.169231  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.435715  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.436085  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.668879  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.669424  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.935723  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.936056  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.167911  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.169223  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.435535  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.435688  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.667876  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.668772  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.935116  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.935306  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.168833  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.169223  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:18.243028  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:18.435599  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.435922  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.667531  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.667823  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:18.934768  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.935422  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.168403  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.168935  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.435780  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.436138  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.667749  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.669398  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.935477  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.935689  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.167712  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.168919  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:20.243396  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:20.435954  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.436138  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.668103  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.668565  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:20.808082  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:20.936739  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.937088  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.168652  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:21.169529  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.436221  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.437083  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.669992  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:21.671073  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:21.671137  287891 retry.go:31] will retry after 18.178879218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:21.671733  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.936068  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.936201  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.177166  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.181558  287891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:30:22.181583  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.248010  287891 node_ready.go:49] node "addons-720971" is "Ready"
	I1101 09:30:22.248041  287891 node_ready.go:38] duration metric: took 39.008163158s for node "addons-720971" to be "Ready" ...
	I1101 09:30:22.248064  287891 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:22.248136  287891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:22.270188  287891 api_server.go:72] duration metric: took 40.988489149s to wait for apiserver process to appear ...
	I1101 09:30:22.270218  287891 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:22.270237  287891 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:30:22.282789  287891 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
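The healthz probe above is minikube polling the API server directly over HTTPS. As a rough manual equivalent (assuming the default kubeadm binding that allows unauthenticated reads of /healthz and /version), the same endpoints can be queried from the host:

    # Hypothetical manual check of the endpoint polled at api_server.go:253;
    # -k skips certificate verification, mirroring an anonymous probe.
    curl -k https://192.168.49.2:8443/healthz
    # expected body on success: ok
    curl -k https://192.168.49.2:8443/version    # reports the control plane version (v1.34.1 here)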
	I1101 09:30:22.293181  287891 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:22.293213  287891 api_server.go:131] duration metric: took 22.988366ms to wait for apiserver health ...
	I1101 09:30:22.293223  287891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:22.358993  287891 system_pods.go:59] 19 kube-system pods found
	I1101 09:30:22.359029  287891 system_pods.go:61] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending
	I1101 09:30:22.359036  287891 system_pods.go:61] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending
	I1101 09:30:22.359040  287891 system_pods.go:61] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending
	I1101 09:30:22.359045  287891 system_pods.go:61] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending
	I1101 09:30:22.359050  287891 system_pods.go:61] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.359055  287891 system_pods.go:61] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.359059  287891 system_pods.go:61] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.359064  287891 system_pods.go:61] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.359068  287891 system_pods.go:61] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending
	I1101 09:30:22.359074  287891 system_pods.go:61] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.359079  287891 system_pods.go:61] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.359088  287891 system_pods.go:61] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.359101  287891 system_pods.go:61] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending
	I1101 09:30:22.359107  287891 system_pods.go:61] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending
	I1101 09:30:22.359114  287891 system_pods.go:61] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.359122  287891 system_pods.go:61] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending
	I1101 09:30:22.359128  287891 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending
	I1101 09:30:22.359136  287891 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.359146  287891 system_pods.go:61] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.359154  287891 system_pods.go:74] duration metric: took 65.923954ms to wait for pod list to return data ...
	I1101 09:30:22.359164  287891 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:22.369363  287891 default_sa.go:45] found service account: "default"
	I1101 09:30:22.369390  287891 default_sa.go:55] duration metric: took 10.217342ms for default service account to be created ...
	I1101 09:30:22.369400  287891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:22.401552  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:22.401593  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending
	I1101 09:30:22.401599  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending
	I1101 09:30:22.401603  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending
	I1101 09:30:22.401608  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending
	I1101 09:30:22.401612  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.401616  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.401621  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.401625  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.401629  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending
	I1101 09:30:22.401633  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.401637  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.401654  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.401664  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending
	I1101 09:30:22.401670  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending
	I1101 09:30:22.401676  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.401750  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending
	I1101 09:30:22.401765  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.401773  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.401788  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.401810  287891 retry.go:31] will retry after 195.705468ms: missing components: kube-dns
	I1101 09:30:22.469840  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.469917  287891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:30:22.469931  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.601827  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:22.601872  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:22.601879  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending
	I1101 09:30:22.601890  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:22.601897  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:22.601902  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.601907  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.601911  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.601921  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.601926  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending
	I1101 09:30:22.601930  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.601934  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.601953  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.601965  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending
	I1101 09:30:22.601972  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:22.601986  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.601991  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending
	I1101 09:30:22.601997  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.602007  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.602035  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.602054  287891 retry.go:31] will retry after 269.428383ms: missing components: kube-dns
	I1101 09:30:22.675155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.679229  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.880157  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:22.880203  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:22.880214  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:22.880222  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:22.880230  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:22.880241  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.880252  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.880257  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.880275  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.880283  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:22.880292  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.880297  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.880303  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.880313  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:22.880322  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:22.880331  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.880338  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:22.880357  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.880365  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.880376  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.880392  287891 retry.go:31] will retry after 334.275735ms: missing components: kube-dns
	I1101 09:30:22.976734  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.977065  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.178462  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.178621  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.291237  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:23.291285  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:23.291295  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:23.291303  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:23.291311  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:23.291320  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:23.291326  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:23.291339  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:23.291352  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:23.291372  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:23.291376  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:23.291387  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:23.291394  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:23.291401  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:23.291409  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:23.291416  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:23.291433  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:23.291440  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.291455  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.291462  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:23.291482  287891 retry.go:31] will retry after 513.832273ms: missing components: kube-dns
	I1101 09:30:23.437920  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.438124  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.668020  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.675210  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.812023  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:23.812059  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:23.812069  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:23.812078  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:23.812086  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:23.812105  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:23.812110  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:23.812115  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:23.812125  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:23.812133  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:23.812143  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:23.812147  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:23.812154  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:23.812171  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:23.812177  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:23.812186  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:23.812193  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:23.812204  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.812212  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.812223  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:23.812245  287891 retry.go:31] will retry after 706.181805ms: missing components: kube-dns
	I1101 09:30:23.943966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.945179  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.170451  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.170655  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.436545  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.436681  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.526040  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:24.526077  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Running
	I1101 09:30:24.526090  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:24.526097  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:24.526106  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:24.526111  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:24.526116  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:24.526125  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:24.526129  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:24.526137  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:24.526149  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:24.526155  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:24.526163  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:24.526175  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:24.526181  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:24.526195  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:24.526202  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:24.526208  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:24.526218  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:24.526223  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Running
	I1101 09:30:24.526232  287891 system_pods.go:126] duration metric: took 2.156825817s to wait for k8s-apps to be running ...
	I1101 09:30:24.526245  287891 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:24.526302  287891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:24.545784  287891 system_svc.go:56] duration metric: took 19.528281ms WaitForService to wait for kubelet
	I1101 09:30:24.545813  287891 kubeadm.go:587] duration metric: took 43.264134003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:24.545832  287891 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:24.549754  287891 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:30:24.549789  287891 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:24.549802  287891 node_conditions.go:105] duration metric: took 3.964088ms to run NodePressure ...
	I1101 09:30:24.549814  287891 start.go:242] waiting for startup goroutines ...
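(Editorial note on the kubelet check a few lines above: the WaitForService step simply shells out to systemctl, which exits 0 from "is-active --quiet" only when the unit is active. A minimal sketch of the same probe in Go is shown below; it is not minikube code, and it drops the "sudo" and the extra "service" token from the logged command for simplicity.)

    // kubelet_active.go - hedged sketch of the "is kubelet running" probe seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "systemctl is-active --quiet kubelet" prints nothing and exits 0 iff the unit is active.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }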
	I1101 09:30:24.668143  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.669285  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.940082  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.940379  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.170281  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.170417  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.438147  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.438749  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.668069  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.670917  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.937376  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.937753  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.170656  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.171220  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.435577  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:26.436388  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.671244  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.671430  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.938090  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.938690  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.169111  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.169537  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:27.435793  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.436316  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.669065  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.669890  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:27.935947  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.936155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.168609  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.170739  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.436754  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.437195  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.672029  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.672309  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.936359  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.936545  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.170146  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.170869  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.435233  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.435749  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:29.668438  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.668652  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.936297  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.936490  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.175062  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.176835  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.436478  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.437157  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.668788  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.669639  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.935260  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.935476  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.176833  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.177309  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.442785  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.443906  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.679573  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.680040  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.936891  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.936951  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.172333  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.173177  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.441538  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:32.442102  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.672261  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.672733  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.940175  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.940579  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.169243  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.169956  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.439153  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:33.439399  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.669892  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.670078  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.935537  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.935852  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.168361  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.168574  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.436877  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.436995  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.671820  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.672077  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.936058  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.936222  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.171344  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.172281  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.436996  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.437202  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.669868  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.673974  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.936738  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.937176  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.169598  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.171886  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.435409  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:36.435875  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.669652  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.669982  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.936432  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.936831  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.168633  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.170625  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.436833  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.437249  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.670685  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.670867  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.938381  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.939471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.169230  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.169473  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.437906  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:38.438896  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.670934  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.671203  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.935577  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.936978  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.171994  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.172733  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.436155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.436616  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.671239  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.671676  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.851136  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:39.937471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.937941  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.173970  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.174354  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.438078  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.438521  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.674997  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.675479  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.949659  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.950377  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.027801  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.176576583s)
	W1101 09:30:41.027901  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:41.027963  287891 retry.go:31] will retry after 32.217754575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
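(Editorial note: the failure above is kubectl's client-side schema validation. Every document in an applied manifest must carry top-level apiVersion and kind fields, and "[apiVersion not set, kind not set]" means at least one document in ig-crd.yaml is missing both, so minikube schedules a retry instead of passing --validate=false. Below is a minimal sketch of that check, outside of minikube, with the file path taken from the failing command in the log and assuming gopkg.in/yaml.v3 is available.)

    // manifest_check.go - hedged sketch of the rule kubectl enforces above:
    // every object in an applied manifest must set apiVersion and kind.
    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path taken from the failing command in the log; any manifest file works the same way.
        data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // A manifest may hold several YAML documents separated by "---".
        for i, doc := range strings.Split(string(data), "\n---") {
            var obj struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
                fmt.Printf("document %d: invalid YAML: %v\n", i, err)
                continue
            }
            if obj.APIVersion == "" || obj.Kind == "" {
                // This is the condition behind "[apiVersion not set, kind not set]".
                fmt.Printf("document %d: apiVersion or kind not set\n", i)
            }
        }
    }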
	I1101 09:30:41.170138  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.170656  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.437630  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.437792  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:41.668964  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.671815  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.937080  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.937438  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:42.172966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.173174  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.437460  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:42.437905  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.672591  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.673022  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.937137  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.937507  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:43.221197  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.227143  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.437058  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.437411  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:43.672828  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.673224  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.936204  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.936314  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:44.171163  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.171365  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.437661  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.438020  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:44.669031  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.672809  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.940924  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.941323  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:45.178613  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.180062  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.439550  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:45.439996  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:45.673329  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.673771  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.937058  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:45.937492  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.170572  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.170855  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.436870  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:46.437258  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.670484  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.670896  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.936132  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.937384  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:47.170126  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.170651  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.436012  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:47.436492  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.671389  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.671616  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.936169  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.936311  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:48.170000  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.170345  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.437016  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:48.437212  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.671218  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.671303  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.936611  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.937266  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:49.168476  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.171588  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.437508  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:49.437940  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:49.672167  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.672500  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.934434  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:49.936357  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.169564  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.169760  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.437829  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.438211  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:50.674710  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.677122  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.937751  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:50.938077  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.172737  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.173355  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.440196  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.440699  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:51.668883  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.669027  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.934912  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:51.936136  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.170237  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.170728  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.436878  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.438229  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:52.702300  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.712110  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.951189  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:52.954209  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.170561  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.170653  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.467814  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:53.467901  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.671592  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.671785  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.936313  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.936453  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:54.170396  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.170993  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.435955  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.436850  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:54.668202  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.671053  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.935933  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.936611  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:55.171998  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:55.177801  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.439018  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:55.439198  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:55.668937  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:55.670914  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.935977  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:55.937070  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.170412  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:56.170565  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.436219  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:56.436748  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.669927  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.671126  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:56.936626  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.937044  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:57.169827  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:57.180220  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.436844  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.437615  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:57.670171  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:57.670632  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.935027  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.935185  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:58.174926  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:58.175121  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.436880  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.438013  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:58.667718  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:58.669228  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.935879  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.936021  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:59.169439  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.169650  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:59.435350  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:59.435569  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:59.669597  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.677941  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:59.935192  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:59.935450  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.232144  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:00.232747  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.437079  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.439374  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:00.670253  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:00.674398  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.936165  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.936316  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:01.169439  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:01.170582  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:01.448491  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:01.448924  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:01.670827  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:01.670976  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:01.938219  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:01.938465  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:02.170331  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:02.170591  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.435523  287891 kapi.go:107] duration metric: took 1m15.003814171s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:31:02.435684  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:02.669915  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:02.674151  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.937097  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:03.168899  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:03.170024  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.435609  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:03.667349  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:03.668926  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.935357  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:04.169225  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:04.169451  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.436041  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:04.668730  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:04.670479  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.936182  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:05.172973  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:05.173183  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:05.436123  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:05.670514  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:05.670694  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:05.934826  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:06.170289  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:06.170698  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:06.435301  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:06.669767  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:06.669961  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:06.934850  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:07.170347  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:07.170425  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:07.436224  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:07.670030  287891 kapi.go:107] duration metric: took 1m17.005325249s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:31:07.670553  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:07.673434  287891 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-720971 cluster.
	I1101 09:31:07.676291  287891 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:31:07.679379  287891 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
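(Editorial note: the three messages above describe the gcp-auth opt-out. Pods that carry a label with the gcp-auth-skip-secret key are left without the credential mount. The sketch below shows such a pod object in Go; the pod name, namespace, image, and the label value "true" are illustrative assumptions, since the log only states that the label key must be present. It assumes k8s.io/api and k8s.io/apimachinery are available.)

    // skip_gcp_auth.go - hedged sketch of a pod that opts out of the gcp-auth credential mount.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds", // illustrative name
                Namespace: "default",      // illustrative namespace
                Labels: map[string]string{
                    // The key is what matters per the minikube message; the value is an assumption.
                    "gcp-auth-skip-secret": "true",
                },
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "nginx"}, // illustrative container
                },
            },
        }
        fmt.Printf("%s/%s labels=%v\n", pod.Namespace, pod.Name, pod.Labels)
    }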
	I1101 09:31:07.934649  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:08.171110  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:08.435716  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:08.671018  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:08.935831  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:09.169506  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:09.436531  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:09.670084  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:09.935279  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:10.170139  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:10.435352  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:10.669602  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:10.935568  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:11.170155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:11.434919  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:11.669132  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:11.935161  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:12.169789  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:12.435750  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:12.672217  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:12.936384  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:13.169562  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:13.246793  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:31:13.442447  287891 kapi.go:107] duration metric: took 1m26.010649276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:31:13.672238  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:14.170471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:14.645281  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.398452607s)
	W1101 09:31:14.645315  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:31:14.645333  287891 retry.go:31] will retry after 16.272308355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:31:14.670941  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:15.172669  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:15.668870  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:16.169633  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:16.670952  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:17.179647  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:17.674210  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:18.186648  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:18.675606  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:19.171379  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:19.670152  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:20.170547  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:20.672162  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:21.174127  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:21.671498  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:22.168709  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:22.669045  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:23.169584  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:23.669164  287891 kapi.go:107] duration metric: took 1m36.003483485s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:31:30.918424  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:31:31.794110  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:31:31.794200  287891 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:31:31.797657  287891 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, default-storageclass, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1101 09:31:31.800546  287891 addons.go:515] duration metric: took 1m50.518577954s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin storage-provisioner default-storageclass cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1101 09:31:31.800593  287891 start.go:247] waiting for cluster config update ...
	I1101 09:31:31.800615  287891 start.go:256] writing updated cluster config ...
	I1101 09:31:31.800898  287891 ssh_runner.go:195] Run: rm -f paused
	I1101 09:31:31.804477  287891 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:31.808221  287891 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4fl56" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.812585  287891 pod_ready.go:94] pod "coredns-66bc5c9577-4fl56" is "Ready"
	I1101 09:31:31.812611  287891 pod_ready.go:86] duration metric: took 4.364148ms for pod "coredns-66bc5c9577-4fl56" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.814845  287891 pod_ready.go:83] waiting for pod "etcd-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.819242  287891 pod_ready.go:94] pod "etcd-addons-720971" is "Ready"
	I1101 09:31:31.819269  287891 pod_ready.go:86] duration metric: took 4.362761ms for pod "etcd-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.821678  287891 pod_ready.go:83] waiting for pod "kube-apiserver-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.826310  287891 pod_ready.go:94] pod "kube-apiserver-addons-720971" is "Ready"
	I1101 09:31:31.826375  287891 pod_ready.go:86] duration metric: took 4.591903ms for pod "kube-apiserver-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.828671  287891 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:32.208245  287891 pod_ready.go:94] pod "kube-controller-manager-addons-720971" is "Ready"
	I1101 09:31:32.208312  287891 pod_ready.go:86] duration metric: took 379.616372ms for pod "kube-controller-manager-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:32.408477  287891 pod_ready.go:83] waiting for pod "kube-proxy-p9fft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:32.808783  287891 pod_ready.go:94] pod "kube-proxy-p9fft" is "Ready"
	I1101 09:31:32.808812  287891 pod_ready.go:86] duration metric: took 400.266182ms for pod "kube-proxy-p9fft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:33.009433  287891 pod_ready.go:83] waiting for pod "kube-scheduler-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:33.408678  287891 pod_ready.go:94] pod "kube-scheduler-addons-720971" is "Ready"
	I1101 09:31:33.408710  287891 pod_ready.go:86] duration metric: took 399.250289ms for pod "kube-scheduler-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:33.408723  287891 pod_ready.go:40] duration metric: took 1.604217801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:33.466762  287891 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:31:33.470270  287891 out.go:179] * Done! kubectl is now configured to use "addons-720971" cluster and "default" namespace by default
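The repeated apply failures earlier in this log come from kubectl's client-side validation: the first document in /etc/kubernetes/addons/ig-crd.yaml is missing the required top-level apiVersion and kind fields (a CRD manifest normally begins with apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition). The companion ig-deployment.yaml resources apply cleanly on every retry ("unchanged"/"configured"), so only the gadget CRD is affected, and passing --validate=false as the error suggests is unlikely to help, since kubectl still needs apiVersion and kind to map the document to a resource. A minimal way to confirm the malformed header on the node, assuming the addons-720971 profile is still running and the addon files are still in place:

	out/minikube-linux-arm64 -p addons-720971 ssh -- sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml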
	
	
	==> CRI-O <==
	Nov 01 09:34:34 addons-720971 crio[832]: time="2025-11-01T09:34:34.978123419Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pfn8v Namespace:default ID:7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf UID:f4be11bd-69d2-4c31-8d81-80d4546d6aa9 NetNS:/var/run/netns/6e154d13-663d-4473-b9e0-37f2b112e62d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ceba0}] Aliases:map[]}"
	Nov 01 09:34:34 addons-720971 crio[832]: time="2025-11-01T09:34:34.978180093Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-pfn8v to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:34:34 addons-720971 crio[832]: time="2025-11-01T09:34:34.996957155Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pfn8v Namespace:default ID:7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf UID:f4be11bd-69d2-4c31-8d81-80d4546d6aa9 NetNS:/var/run/netns/6e154d13-663d-4473-b9e0-37f2b112e62d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ceba0}] Aliases:map[]}"
	Nov 01 09:34:34 addons-720971 crio[832]: time="2025-11-01T09:34:34.997129532Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-pfn8v for CNI network kindnet (type=ptp)"
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.004994328Z" level=info msg="Ran pod sandbox 7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf with infra container: default/hello-world-app-5d498dc89-pfn8v/POD" id=bc58f540-c3e1-4757-af83-2b86ded51433 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.010373585Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=fec26a1e-214e-4933-848c-f79dcee87e9e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.010640824Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=fec26a1e-214e-4933-848c-f79dcee87e9e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.010758463Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=fec26a1e-214e-4933-848c-f79dcee87e9e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.015598485Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=2ea76e00-3617-4fc1-adea-66c162608cea name=/runtime.v1.ImageService/PullImage
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.019147536Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.115932207Z" level=info msg="Removing container: 17e83d6dea58211bedf568eb3c7955f58a20ed6a6a3f8062b3eaed8d94ee58a0" id=f567305e-d7d4-4ba0-9ed0-14fe860c003b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.128037768Z" level=info msg="Error loading conmon cgroup of container 17e83d6dea58211bedf568eb3c7955f58a20ed6a6a3f8062b3eaed8d94ee58a0: cgroup deleted" id=f567305e-d7d4-4ba0-9ed0-14fe860c003b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.134599507Z" level=info msg="Removed container 17e83d6dea58211bedf568eb3c7955f58a20ed6a6a3f8062b3eaed8d94ee58a0: kube-system/registry-creds-764b6fb674-7sxv4/registry-creds" id=f567305e-d7d4-4ba0-9ed0-14fe860c003b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.645673269Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=2ea76e00-3617-4fc1-adea-66c162608cea name=/runtime.v1.ImageService/PullImage
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.654323041Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ea4777a9-2392-4f26-ac59-1d99ae4fe592 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.657010036Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=76660a09-9fea-4139-ae55-e9c60c4aa3ac name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.683232233Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-pfn8v/hello-world-app" id=a83fc29b-b3d9-4b7c-bb0f-24d4a89e9248 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.683369565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.695015515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.695365093Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/68403c09510e84108e3630bb61449e79d747058481a38d7feeb2d34ecf43f4ca/merged/etc/passwd: no such file or directory"
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.69545714Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/68403c09510e84108e3630bb61449e79d747058481a38d7feeb2d34ecf43f4ca/merged/etc/group: no such file or directory"
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.695793853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.725926651Z" level=info msg="Created container 59552100708b3be53cfa32740f41315a47e55405efcd4c94a93056d27c7c9cec: default/hello-world-app-5d498dc89-pfn8v/hello-world-app" id=a83fc29b-b3d9-4b7c-bb0f-24d4a89e9248 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.73255797Z" level=info msg="Starting container: 59552100708b3be53cfa32740f41315a47e55405efcd4c94a93056d27c7c9cec" id=e576b54a-9c0e-4206-9acd-ce06c96b02e3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:35 addons-720971 crio[832]: time="2025-11-01T09:34:35.740468035Z" level=info msg="Started container" PID=7189 containerID=59552100708b3be53cfa32740f41315a47e55405efcd4c94a93056d27c7c9cec description=default/hello-world-app-5d498dc89-pfn8v/hello-world-app id=e576b54a-9c0e-4206-9acd-ce06c96b02e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	59552100708b3       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   7d235b7342ec3       hello-world-app-5d498dc89-pfn8v             default
	72f5c0d009e15       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           1                   03e7ddd331be2       registry-creds-764b6fb674-7sxv4             kube-system
	229b4844d3ea2       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   6836d7be0c5a4       nginx                                       default
	c8492900ba1f0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   906f95fb04e6d       busybox                                     default
	303b571899533       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	e66b9ccb0c01f       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	6cf6775444e13       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	3f38970b15f05       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	c15dba784eeb1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   56bd7113acce1       gadget-f6mdx                                gadget
	2f69e6ade4240       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   10f95ef6363d1       ingress-nginx-controller-675c5ddd98-gkdm4   ingress-nginx
	39c463f92bb15       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   9f02e867cc79b       gcp-auth-78565c9fb4-plnxs                   gcp-auth
	43580d85746e5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	8fe3992cfeef6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   7c680bb1adf7a       registry-proxy-tml2d                        kube-system
	d4f55b3c93144       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   a818dc3c8bcaf       csi-hostpath-attacher-0                     kube-system
	cee7ed9ce1f56       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   2bbf2b4a592bc       csi-hostpath-resizer-0                      kube-system
	663937f8140cd       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   fe385ca222b54       cloud-spanner-emulator-86bd5cbb97-n8sf9     default
	a4e79c5cf7b96       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   a015779c2df17       kube-ingress-dns-minikube                   kube-system
	64a188cb4e7e1       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   0838e59fae3ce       yakd-dashboard-5ff678cb9-p9f57              yakd-dashboard
	86e9c5d9f6cea       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   54847aaeb8447       nvidia-device-plugin-daemonset-6xjv5        kube-system
	0e99ffc2f9984       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   33b32ba722b74       local-path-provisioner-648f6765c9-pxbsb     local-path-storage
	2ee6be51ad680       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              patch                                    0                   24ec6bbbb59ad       ingress-nginx-admission-patch-7jj6d         ingress-nginx
	b30f47b175d57       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   9d1c00aaf96e2       snapshot-controller-7d9fbc56b8-dnt8c        kube-system
	203e43681277e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   0c33bd81780b9       ingress-nginx-admission-create-4f8fn        ingress-nginx
	e02cb9b41b9b1       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   7f3e61b014c7c       metrics-server-85b7d694d7-pv7v7             kube-system
	8e4b16182fc32       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   3e9976a06df86       snapshot-controller-7d9fbc56b8-kph7c        kube-system
	012c36c742b1d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	c87eccd73057d       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   8f40ee15d32fd       registry-6b586f9694-5d8hv                   kube-system
	b28d2db9811d7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   4d214b5c6dde6       coredns-66bc5c9577-4fl56                    kube-system
	1aab4e12b2651       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   41c3e7de2a6c8       storage-provisioner                         kube-system
	fd15c88e36dcc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   e1d13dc9cbe2f       kindnet-trnz5                               kube-system
	5d768341f5651       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   4cec8d85d37a0       kube-proxy-p9fft                            kube-system
	243fa64c16788       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   6f3f729a2be24       kube-controller-manager-addons-720971       kube-system
	4ab2a5f98b253       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   3c308651bb70f       etcd-addons-720971                          kube-system
	f1c57c321c093       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   6cf05cf176b49       kube-scheduler-addons-720971                kube-system
	74a9b3705b5e1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   518c87b10b31f       kube-apiserver-addons-720971                kube-system
	
	
	==> coredns [b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0] <==
	[INFO] 10.244.0.18:60240 - 8823 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002536505s
	[INFO] 10.244.0.18:60240 - 227 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000169974s
	[INFO] 10.244.0.18:60240 - 17874 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010351s
	[INFO] 10.244.0.18:49149 - 57706 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160816s
	[INFO] 10.244.0.18:49149 - 57493 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00022298s
	[INFO] 10.244.0.18:53642 - 16818 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013227s
	[INFO] 10.244.0.18:53642 - 16638 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079083s
	[INFO] 10.244.0.18:42910 - 33333 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087444s
	[INFO] 10.244.0.18:42910 - 33153 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007155s
	[INFO] 10.244.0.18:55701 - 50758 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001336583s
	[INFO] 10.244.0.18:55701 - 50536 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315823s
	[INFO] 10.244.0.18:58373 - 43896 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122111s
	[INFO] 10.244.0.18:58373 - 43740 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162335s
	[INFO] 10.244.0.19:40352 - 39291 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175676s
	[INFO] 10.244.0.19:52901 - 36148 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202417s
	[INFO] 10.244.0.19:41757 - 1701 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213789s
	[INFO] 10.244.0.19:52697 - 56904 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165337s
	[INFO] 10.244.0.19:59663 - 23538 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141846s
	[INFO] 10.244.0.19:43498 - 59881 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009725s
	[INFO] 10.244.0.19:47490 - 42253 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002550338s
	[INFO] 10.244.0.19:36679 - 56440 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001803802s
	[INFO] 10.244.0.19:42030 - 9245 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003643977s
	[INFO] 10.244.0.19:60706 - 46114 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003180188s
	[INFO] 10.244.0.23:32817 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000516689s
	[INFO] 10.244.0.23:55312 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160274s
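The CoreDNS queries above show the pod resolver's search-list expansion: each lookup of registry.kube-system.svc.cluster.local is first tried with the pod's search domains appended (the .kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local, and .us-east-2.compute.internal variants all return NXDOMAIN) before the fully qualified name answers NOERROR. One of these lookups can be reproduced from inside the cluster, assuming the busybox test pod is still running in the default namespace:

	kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local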
	
	
	==> describe nodes <==
	Name:               addons-720971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-720971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-720971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-720971
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-720971"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-720971
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:34:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:33:41 +0000   Sat, 01 Nov 2025 09:29:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:33:41 +0000   Sat, 01 Nov 2025 09:29:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:33:41 +0000   Sat, 01 Nov 2025 09:29:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:33:41 +0000   Sat, 01 Nov 2025 09:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-720971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                83b5d1ed-3170-4ffb-be3a-c9b9b98815af
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     cloud-spanner-emulator-86bd5cbb97-n8sf9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  default                     hello-world-app-5d498dc89-pfn8v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-f6mdx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  gcp-auth                    gcp-auth-78565c9fb4-plnxs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gkdm4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m49s
	  kube-system                 coredns-66bc5c9577-4fl56                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m55s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-hc2br                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 etcd-addons-720971                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s
	  kube-system                 kindnet-trnz5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-720971                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-720971        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-p9fft                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-720971                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 metrics-server-85b7d694d7-pv7v7              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m50s
	  kube-system                 nvidia-device-plugin-daemonset-6xjv5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 registry-6b586f9694-5d8hv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 registry-creds-764b6fb674-7sxv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-tml2d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-dnt8c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 snapshot-controller-7d9fbc56b8-kph7c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-pxbsb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-p9f57               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m55s                kube-proxy       
	  Normal   Starting                 5m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node addons-720971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node addons-720971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m8s (x8 over 5m8s)  kubelet          Node addons-720971 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m1s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m1s                 kubelet          Node addons-720971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m1s                 kubelet          Node addons-720971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m1s                 kubelet          Node addons-720971 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s                node-controller  Node addons-720971 event: Registered Node addons-720971 in Controller
	  Normal   NodeReady                4m15s                kubelet          Node addons-720971 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8] <==
	{"level":"warn","ts":"2025-11-01T09:29:31.472290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.501796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.532982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.577643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.598517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.648203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.670585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.701868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.731727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.751259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.773760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.814460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.833642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.868604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.897747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.931014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.961506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.981851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:32.142659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:48.125786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:48.141820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.865308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.887251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.933742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.947993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36252","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [39c463f92bb152c7e8a166839eda7f4aadd487376b16e33629e6bc53f8bd719e] <==
	2025/11/01 09:31:06 GCP Auth Webhook started!
	2025/11/01 09:31:34 Ready to marshal response ...
	2025/11/01 09:31:34 Ready to write response ...
	2025/11/01 09:31:34 Ready to marshal response ...
	2025/11/01 09:31:34 Ready to write response ...
	2025/11/01 09:31:34 Ready to marshal response ...
	2025/11/01 09:31:34 Ready to write response ...
	2025/11/01 09:31:55 Ready to marshal response ...
	2025/11/01 09:31:55 Ready to write response ...
	2025/11/01 09:32:01 Ready to marshal response ...
	2025/11/01 09:32:01 Ready to write response ...
	2025/11/01 09:32:13 Ready to marshal response ...
	2025/11/01 09:32:13 Ready to write response ...
	2025/11/01 09:32:25 Ready to marshal response ...
	2025/11/01 09:32:25 Ready to write response ...
	2025/11/01 09:32:46 Ready to marshal response ...
	2025/11/01 09:32:46 Ready to write response ...
	2025/11/01 09:32:46 Ready to marshal response ...
	2025/11/01 09:32:46 Ready to write response ...
	2025/11/01 09:32:54 Ready to marshal response ...
	2025/11/01 09:32:54 Ready to write response ...
	2025/11/01 09:34:34 Ready to marshal response ...
	2025/11/01 09:34:34 Ready to write response ...
	
	
	==> kernel <==
	 09:34:36 up  1:17,  0 user,  load average: 0.93, 1.73, 2.71
	Linux addons-720971 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad] <==
	I1101 09:32:31.444823       1 main.go:301] handling current node
	I1101 09:32:41.445035       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:32:41.445187       1 main.go:301] handling current node
	I1101 09:32:51.445391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:32:51.445494       1 main.go:301] handling current node
	I1101 09:33:01.444540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:01.444614       1 main.go:301] handling current node
	I1101 09:33:11.444676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:11.444709       1 main.go:301] handling current node
	I1101 09:33:21.446850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:21.446889       1 main.go:301] handling current node
	I1101 09:33:31.444466       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:31.444501       1 main.go:301] handling current node
	I1101 09:33:41.447153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:41.447255       1 main.go:301] handling current node
	I1101 09:33:51.449563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:51.449602       1 main.go:301] handling current node
	I1101 09:34:01.444237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:34:01.444269       1 main.go:301] handling current node
	I1101 09:34:11.450447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:34:11.450556       1 main.go:301] handling current node
	I1101 09:34:21.449755       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:34:21.449789       1 main.go:301] handling current node
	I1101 09:34:31.451425       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:34:31.451459       1 main.go:301] handling current node
	
	
	==> kube-apiserver [74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea] <==
	W1101 09:30:09.946849       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:22.017039       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.201.89:443: connect: connection refused
	E1101 09:30:22.017161       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.201.89:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:22.017180       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.201.89:443: connect: connection refused
	E1101 09:30:22.017905       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.201.89:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:22.099229       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.201.89:443: connect: connection refused
	E1101 09:30:22.099271       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.201.89:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:43.090857       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:43.091246       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:30:43.091306       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:30:43.092219       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:43.097066       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:43.118335       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	I1101 09:30:43.277179       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:31:43.849433       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47402: use of closed network connection
	E1101 09:31:44.086634       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E1101 09:31:44.228059       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47430: use of closed network connection
	I1101 09:32:12.959452       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:32:13.262950       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.19.57"}
	I1101 09:32:13.368550       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1101 09:32:15.582526       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1101 09:34:34.854376       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.220.133"}
	
	
	==> kube-controller-manager [243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d] <==
	I1101 09:29:39.884282       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:29:39.885278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:29:39.885539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:29:39.885592       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:29:39.886519       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:29:39.886560       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:29:39.886552       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:29:39.886649       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:29:39.886690       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:29:39.886755       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:29:39.886539       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:29:39.889141       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:29:39.889129       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:29:39.891510       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E1101 09:29:46.126832       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 09:30:09.857055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:30:09.857230       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:30:09.857290       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:30:09.888702       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:30:09.894568       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:30:09.960342       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:09.995816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:24.856401       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1101 09:30:39.965629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:30:40.005811       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147] <==
	I1101 09:29:41.234359       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:29:41.335258       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:41.440188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:41.440223       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:29:41.440299       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:41.513178       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:29:41.513304       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:41.536328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:41.536727       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:41.536951       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:41.542191       1 config.go:200] "Starting service config controller"
	I1101 09:29:41.542275       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:41.542320       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:41.542368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:41.542405       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:41.542439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:41.543190       1 config.go:309] "Starting node config controller"
	I1101 09:29:41.543328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:41.543371       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:41.643088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:41.643222       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:41.643241       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440] <==
	I1101 09:29:34.084755       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:34.092349       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:29:34.092707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:29:34.092740       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:29:34.092762       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:29:34.103067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:29:34.103227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:29:34.103330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:29:34.110119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:29:34.110724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:34.110867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:29:34.111072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:34.111173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:34.111272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:29:34.111358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:29:34.111451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:34.111540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:29:34.111628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:29:34.111733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:29:34.111876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:29:34.111987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:29:34.112066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:29:34.112586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:29:34.112697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1101 09:29:35.193191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:32:56 addons-720971 kubelet[1270]: I1101 09:32:56.734193    1270 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8eca106e64845da7ad697b739ab166ba03a8ace266fb5fd08789b42b1d6e75c"
	Nov 01 09:32:56 addons-720971 kubelet[1270]: E1101 09:32:56.736353    1270 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-13036f40-77fc-479b-8d89-adac40366789\" is forbidden: User \"system:node:addons-720971\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-720971' and this object" podUID="32e8c4f4-ddcf-4995-80ff-56d0e37c5f10" pod="local-path-storage/helper-pod-delete-pvc-13036f40-77fc-479b-8d89-adac40366789"
	Nov 01 09:32:57 addons-720971 kubelet[1270]: I1101 09:32:57.626427    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e8c4f4-ddcf-4995-80ff-56d0e37c5f10" path="/var/lib/kubelet/pods/32e8c4f4-ddcf-4995-80ff-56d0e37c5f10/volumes"
	Nov 01 09:32:58 addons-720971 kubelet[1270]: I1101 09:32:58.623777    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6xjv5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:33:03 addons-720971 kubelet[1270]: I1101 09:33:03.623812    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-5d8hv" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:33:35 addons-720971 kubelet[1270]: I1101 09:33:35.594478    1270 scope.go:117] "RemoveContainer" containerID="41c0e5496253249de5a59c4b9688dbb2f262691028fd1fbbeab6a635d206caa7"
	Nov 01 09:33:35 addons-720971 kubelet[1270]: I1101 09:33:35.603534    1270 scope.go:117] "RemoveContainer" containerID="669ef674c03561b6f327249a2380594f4d1286944a72bc274cdddfd797c9af10"
	Nov 01 09:33:47 addons-720971 kubelet[1270]: I1101 09:33:47.624606    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tml2d" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:00 addons-720971 kubelet[1270]: I1101 09:34:00.624602    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6xjv5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:30 addons-720971 kubelet[1270]: I1101 09:34:30.623952    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-5d8hv" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:32 addons-720971 kubelet[1270]: I1101 09:34:32.224920    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7sxv4" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:32 addons-720971 kubelet[1270]: W1101 09:34:32.252210    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/crio-03e7ddd331be259c84522f31200ea3309a35164a44f5b306b5594ae651b4738e WatchSource:0}: Error finding container 03e7ddd331be259c84522f31200ea3309a35164a44f5b306b5594ae651b4738e: Status 404 returned error can't find the container with id 03e7ddd331be259c84522f31200ea3309a35164a44f5b306b5594ae651b4738e
	Nov 01 09:34:34 addons-720971 kubelet[1270]: I1101 09:34:34.093515    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7sxv4" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:34 addons-720971 kubelet[1270]: I1101 09:34:34.093574    1270 scope.go:117] "RemoveContainer" containerID="17e83d6dea58211bedf568eb3c7955f58a20ed6a6a3f8062b3eaed8d94ee58a0"
	Nov 01 09:34:34 addons-720971 kubelet[1270]: I1101 09:34:34.722418    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nwg\" (UniqueName: \"kubernetes.io/projected/f4be11bd-69d2-4c31-8d81-80d4546d6aa9-kube-api-access-27nwg\") pod \"hello-world-app-5d498dc89-pfn8v\" (UID: \"f4be11bd-69d2-4c31-8d81-80d4546d6aa9\") " pod="default/hello-world-app-5d498dc89-pfn8v"
	Nov 01 09:34:34 addons-720971 kubelet[1270]: I1101 09:34:34.723037    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f4be11bd-69d2-4c31-8d81-80d4546d6aa9-gcp-creds\") pod \"hello-world-app-5d498dc89-pfn8v\" (UID: \"f4be11bd-69d2-4c31-8d81-80d4546d6aa9\") " pod="default/hello-world-app-5d498dc89-pfn8v"
	Nov 01 09:34:35 addons-720971 kubelet[1270]: W1101 09:34:35.003062    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/crio-7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf WatchSource:0}: Error finding container 7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf: Status 404 returned error can't find the container with id 7d235b7342ec34aec5e74211626bfbc1643e127500a1c3b30065a2abf1e4f7bf
	Nov 01 09:34:35 addons-720971 kubelet[1270]: I1101 09:34:35.100708    1270 scope.go:117] "RemoveContainer" containerID="17e83d6dea58211bedf568eb3c7955f58a20ed6a6a3f8062b3eaed8d94ee58a0"
	Nov 01 09:34:35 addons-720971 kubelet[1270]: I1101 09:34:35.101029    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7sxv4" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:35 addons-720971 kubelet[1270]: I1101 09:34:35.101066    1270 scope.go:117] "RemoveContainer" containerID="72f5c0d009e157120bffd9f67e17392aad376806601adb6cb2730a59960b873b"
	Nov 01 09:34:35 addons-720971 kubelet[1270]: E1101 09:34:35.101219    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7sxv4_kube-system(f830ed47-72eb-4e5e-b87f-fb1b4985d259)\"" pod="kube-system/registry-creds-764b6fb674-7sxv4" podUID="f830ed47-72eb-4e5e-b87f-fb1b4985d259"
	Nov 01 09:34:36 addons-720971 kubelet[1270]: I1101 09:34:36.123760    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7sxv4" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:36 addons-720971 kubelet[1270]: I1101 09:34:36.124397    1270 scope.go:117] "RemoveContainer" containerID="72f5c0d009e157120bffd9f67e17392aad376806601adb6cb2730a59960b873b"
	Nov 01 09:34:36 addons-720971 kubelet[1270]: E1101 09:34:36.126760    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7sxv4_kube-system(f830ed47-72eb-4e5e-b87f-fb1b4985d259)\"" pod="kube-system/registry-creds-764b6fb674-7sxv4" podUID="f830ed47-72eb-4e5e-b87f-fb1b4985d259"
	Nov 01 09:34:36 addons-720971 kubelet[1270]: I1101 09:34:36.175083    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-pfn8v" podStartSLOduration=1.5319470160000002 podStartE2EDuration="2.175063716s" podCreationTimestamp="2025-11-01 09:34:34 +0000 UTC" firstStartedPulling="2025-11-01 09:34:35.012334133 +0000 UTC m=+299.619309346" lastFinishedPulling="2025-11-01 09:34:35.655450825 +0000 UTC m=+300.262426046" observedRunningTime="2025-11-01 09:34:36.173361898 +0000 UTC m=+300.780337127" watchObservedRunningTime="2025-11-01 09:34:36.175063716 +0000 UTC m=+300.782038921"
	
	
	==> storage-provisioner [1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1] <==
	W1101 09:34:12.481786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:14.484936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:14.491643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:16.494374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:16.498840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:18.501484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:18.506304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:20.509577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:20.514811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:22.517812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:22.522688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:24.526502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:24.533229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:26.536138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:26.540980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:28.544457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:28.551147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:30.554504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:30.561390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:32.565903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:32.571622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:34.575768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:34.580686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:36.584214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:36.591386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-720971 -n addons-720971
helpers_test.go:269: (dbg) Run:  kubectl --context addons-720971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-720971 describe pod ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-720971 describe pod ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d: exit status 1 (110.542058ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4f8fn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7jj6d" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-720971 describe pod ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (285.337958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:34:38.163261  297582 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:38.164160  297582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:38.164198  297582 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:38.164221  297582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:38.164499  297582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:34:38.164823  297582 mustload.go:66] Loading cluster: addons-720971
	I1101 09:34:38.165275  297582 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:38.165313  297582 addons.go:607] checking whether the cluster is paused
	I1101 09:34:38.165455  297582 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:38.165484  297582 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:34:38.165999  297582 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:34:38.184290  297582 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:38.184351  297582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:34:38.214951  297582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:34:38.316262  297582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:38.316355  297582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:38.353790  297582 cri.go:89] found id: "72f5c0d009e157120bffd9f67e17392aad376806601adb6cb2730a59960b873b"
	I1101 09:34:38.353820  297582 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:34:38.353825  297582 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:34:38.353829  297582 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:34:38.353833  297582 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:34:38.353836  297582 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:34:38.353840  297582 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:34:38.353843  297582 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:34:38.353867  297582 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:34:38.353887  297582 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:34:38.353892  297582 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:34:38.353895  297582 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:34:38.353898  297582 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:34:38.353902  297582 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:34:38.353905  297582 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:34:38.353914  297582 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:34:38.353922  297582 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:34:38.353940  297582 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:34:38.353946  297582 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:34:38.353949  297582 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:34:38.353962  297582 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:34:38.353965  297582 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:34:38.353969  297582 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:34:38.353976  297582 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:34:38.353980  297582 cri.go:89] found id: ""
	I1101 09:34:38.354048  297582 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:38.370508  297582 out.go:203] 
	W1101 09:34:38.373768  297582 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:34:38.373799  297582 out.go:285] * 
	* 
	W1101 09:34:38.380391  297582 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:34:38.386392  297582 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable ingress --alsologtostderr -v=1: exit status 11 (268.690647ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:34:38.446599  297627 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:38.447404  297627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:38.447443  297627 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:38.447467  297627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:38.447770  297627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:34:38.448108  297627 mustload.go:66] Loading cluster: addons-720971
	I1101 09:34:38.448524  297627 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:38.448569  297627 addons.go:607] checking whether the cluster is paused
	I1101 09:34:38.448697  297627 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:38.448731  297627 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:34:38.449224  297627 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:34:38.467018  297627 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:38.467081  297627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:34:38.485021  297627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:34:38.593821  297627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:38.593937  297627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:38.628527  297627 cri.go:89] found id: "72f5c0d009e157120bffd9f67e17392aad376806601adb6cb2730a59960b873b"
	I1101 09:34:38.628560  297627 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:34:38.628566  297627 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:34:38.628570  297627 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:34:38.628574  297627 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:34:38.628578  297627 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:34:38.628581  297627 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:34:38.628584  297627 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:34:38.628588  297627 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:34:38.628594  297627 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:34:38.628597  297627 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:34:38.628601  297627 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:34:38.628604  297627 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:34:38.628607  297627 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:34:38.628611  297627 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:34:38.628620  297627 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:34:38.628631  297627 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:34:38.628636  297627 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:34:38.628639  297627 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:34:38.628642  297627 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:34:38.628648  297627 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:34:38.628651  297627 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:34:38.628654  297627 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:34:38.628658  297627 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:34:38.628661  297627 cri.go:89] found id: ""
	I1101 09:34:38.628716  297627 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:38.643645  297627 out.go:203] 
	W1101 09:34:38.646275  297627 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:34:38.646302  297627 out.go:285] * 
	* 
	W1101 09:34:38.652542  297627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:34:38.655221  297627 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.01s)
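Judging from the stderr above, exit status 11 is raised by minikube's paused-cluster pre-check rather than by the addon teardown itself: the crictl listing returns the kube-system container IDs, but the follow-up "sudo runc list -f json" fails inside the crio node with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED. A minimal sketch for re-running that check by hand, assuming the addons-720971 profile from this report is still up (both commands are taken verbatim from the ssh_runner lines in the log; nothing new is introduced):

	# list kube-system containers via CRI, as the disable path does (cri.go:54 above)
	out/minikube-linux-arm64 -p addons-720971 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# the paused-state probe that fails in this run; expect exit 1 and
	# "open /run/runc: no such file or directory" on the crio runtime
	out/minikube-linux-arm64 -p addons-720971 ssh -- sudo runc list -f json

The same pre-check failure explains the other MK_ADDON_DISABLE_PAUSED exits recorded in the addon tests below.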

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-f6mdx" [06150c7d-fc2f-4bff-92b7-8baa1930fff0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003873178s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (266.987241ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:12.436411  295121 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:12.437413  295121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:12.437455  295121 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:12.437477  295121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:12.437811  295121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:12.438175  295121 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:12.438580  295121 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:12.438622  295121 addons.go:607] checking whether the cluster is paused
	I1101 09:32:12.438750  295121 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:12.438785  295121 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:12.439333  295121 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:12.457846  295121 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:12.457942  295121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:12.475250  295121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:12.585683  295121 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:12.585817  295121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:12.616007  295121 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:12.616033  295121 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:12.616038  295121 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:12.616042  295121 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:12.616045  295121 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:12.616049  295121 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:12.616052  295121 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:12.616056  295121 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:12.616058  295121 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:12.616065  295121 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:12.616068  295121 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:12.616071  295121 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:12.616075  295121 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:12.616079  295121 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:12.616082  295121 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:12.616088  295121 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:12.616096  295121 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:12.616100  295121 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:12.616103  295121 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:12.616106  295121 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:12.616110  295121 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:12.616114  295121 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:12.616117  295121 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:12.616132  295121 cri.go:89] found id: ""
	I1101 09:32:12.616182  295121 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:12.631100  295121 out.go:203] 
	W1101 09:32:12.634147  295121 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:12.634174  295121 out.go:285] * 
	* 
	W1101 09:32:12.640721  295121 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:12.647665  295121 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 14.175061ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004167854s
addons_test.go:463: (dbg) Run:  kubectl --context addons-720971 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (283.753534ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:06.162878  295053 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:06.163737  295053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:06.163753  295053 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:06.163758  295053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:06.164063  295053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:06.164402  295053 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:06.164869  295053 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:06.164892  295053 addons.go:607] checking whether the cluster is paused
	I1101 09:32:06.165049  295053 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:06.165070  295053 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:06.165574  295053 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:06.183779  295053 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:06.183849  295053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:06.201675  295053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:06.308131  295053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:06.308209  295053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:06.343116  295053 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:06.343138  295053 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:06.343149  295053 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:06.343153  295053 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:06.343157  295053 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:06.343161  295053 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:06.343164  295053 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:06.343167  295053 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:06.343170  295053 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:06.343176  295053 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:06.343179  295053 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:06.343182  295053 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:06.343186  295053 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:06.343189  295053 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:06.343192  295053 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:06.343197  295053 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:06.343204  295053 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:06.343208  295053 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:06.343211  295053 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:06.343214  295053 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:06.343218  295053 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:06.343222  295053 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:06.343225  295053 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:06.343227  295053 cri.go:89] found id: ""
	I1101 09:32:06.343276  295053 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:06.359978  295053 out.go:203] 
	W1101 09:32:06.363063  295053 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:06.363101  295053 out.go:285] * 
	* 
	W1101 09:32:06.369474  295053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:06.375719  295053 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.39s)
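Every addon enable/disable failure in this run shares the trace above: before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node, and that second step exits 1 because /run/runc does not exist. The crictl step succeeds, so the containers are present; only the runc state directory is missing, which on a CRI-O node is plausibly because CRI-O is configured with a different OCI runtime (for example crun) or a different runtime state root. That cause is an assumption, not something this log proves. A minimal sketch for reproducing the check by hand, using the same profile name and the same commands shown in the trace (the /run/crun path in the last line is an assumption):

    out/minikube-linux-arm64 -p addons-720971 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    out/minikube-linux-arm64 -p addons-720971 ssh -- sudo runc list -f json    # reproduces: open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p addons-720971 ssh -- ls /run/runc /run/crun    # assumption: CRI-O may keep runtime state under /run/crun instead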

                                                
                                    
TestAddons/parallel/CSI (46.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 09:31:47.865654  287135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:31:47.870354  287135 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:31:47.870383  287135 kapi.go:107] duration metric: took 4.742977ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.753078ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [915e848b-1c96-49fe-8f14-684ccbc70a0c] Pending
helpers_test.go:352: "task-pv-pod" [915e848b-1c96-49fe-8f14-684ccbc70a0c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [915e848b-1c96-49fe-8f14-684ccbc70a0c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003738312s
addons_test.go:572: (dbg) Run:  kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-720971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-720971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-720971 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-720971 delete pod task-pv-pod: (1.073700699s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-720971 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [aa6fe034-689b-424f-8e98-dbbd66b0689b] Pending
helpers_test.go:352: "task-pv-pod-restore" [aa6fe034-689b-424f-8e98-dbbd66b0689b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [aa6fe034-689b-424f-8e98-dbbd66b0689b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003382581s
addons_test.go:614: (dbg) Run:  kubectl --context addons-720971 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-720971 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-720971 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (260.511709ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:33.386594  295820 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:33.387367  295820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:33.387389  295820 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:33.387396  295820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:33.387659  295820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:33.387958  295820 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:33.388363  295820 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:33.388383  295820 addons.go:607] checking whether the cluster is paused
	I1101 09:32:33.388493  295820 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:33.388508  295820 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:33.389017  295820 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:33.408161  295820 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:33.408219  295820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:33.426905  295820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:33.532361  295820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:33.532459  295820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:33.566555  295820 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:33.566579  295820 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:33.566585  295820 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:33.566589  295820 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:33.566593  295820 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:33.566598  295820 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:33.566601  295820 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:33.566605  295820 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:33.566609  295820 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:33.566617  295820 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:33.566621  295820 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:33.566625  295820 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:33.566630  295820 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:33.566633  295820 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:33.566637  295820 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:33.566646  295820 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:33.566657  295820 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:33.566663  295820 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:33.566666  295820 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:33.566669  295820 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:33.566674  295820 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:33.566677  295820 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:33.566680  295820 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:33.566683  295820 cri.go:89] found id: ""
	I1101 09:32:33.566737  295820 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:33.581881  295820 out.go:203] 
	W1101 09:32:33.584832  295820 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:33.584861  295820 out.go:285] * 
	* 
	W1101 09:32:33.591274  295820 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:33.594259  295820 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (278.808468ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:33.668923  295863 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:33.669638  295863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:33.669655  295863 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:33.669663  295863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:33.670014  295863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:33.670356  295863 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:33.670784  295863 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:33.670807  295863 addons.go:607] checking whether the cluster is paused
	I1101 09:32:33.670953  295863 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:33.670973  295863 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:33.671506  295863 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:33.690220  295863 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:33.690277  295863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:33.708132  295863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:33.816098  295863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:33.816178  295863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:33.847182  295863 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:33.847224  295863 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:33.847231  295863 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:33.847236  295863 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:33.847239  295863 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:33.847244  295863 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:33.847247  295863 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:33.847251  295863 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:33.847254  295863 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:33.847266  295863 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:33.847273  295863 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:33.847276  295863 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:33.847279  295863 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:33.847283  295863 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:33.847286  295863 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:33.847299  295863 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:33.847303  295863 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:33.847308  295863 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:33.847311  295863 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:33.847315  295863 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:33.847323  295863 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:33.847326  295863 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:33.847330  295863 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:33.847333  295863 cri.go:89] found id: ""
	I1101 09:32:33.847389  295863 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:33.862933  295863 out.go:203] 
	W1101 09:32:33.865997  295863 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:33.866044  295863 out.go:285] * 
	* 
	W1101 09:32:33.872539  295863 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:33.875551  295863 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.02s)
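Note that the CSI data path itself passes: hpvc binds, task-pv-pod and task-pv-pod-restore both reach Running, and the snapshot restore completes; only the trailing addon-disable calls hit the same paused-check error. For reference, the manual equivalent of the workflow the test drives, taken from the kubectl calls logged above (waiting for the PVC to report Bound and for the pods to report Running between steps is implied):

    kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-720971 delete pod task-pv-pod
    kubectl --context addons-720971 delete pvc hpvc
    kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-720971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml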

                                                
                                    
TestAddons/parallel/Headlamp (3.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-720971 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-720971 --alsologtostderr -v=1: exit status 11 (268.055295ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:44.542534  294096 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:44.543326  294096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:44.543347  294096 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:44.543353  294096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:44.543681  294096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:31:44.544083  294096 mustload.go:66] Loading cluster: addons-720971
	I1101 09:31:44.544505  294096 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:44.544527  294096 addons.go:607] checking whether the cluster is paused
	I1101 09:31:44.544725  294096 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:44.544746  294096 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:31:44.545251  294096 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:31:44.563920  294096 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:44.563981  294096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:31:44.582396  294096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:31:44.692382  294096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:44.692474  294096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:44.723875  294096 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:31:44.723906  294096 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:31:44.723912  294096 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:31:44.723948  294096 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:31:44.723952  294096 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:31:44.723956  294096 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:31:44.723960  294096 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:31:44.723963  294096 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:31:44.723966  294096 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:31:44.723973  294096 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:31:44.723976  294096 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:31:44.723979  294096 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:31:44.723982  294096 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:31:44.723986  294096 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:31:44.723989  294096 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:31:44.723997  294096 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:31:44.724000  294096 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:31:44.724007  294096 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:31:44.724010  294096 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:31:44.724014  294096 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:31:44.724018  294096 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:31:44.724023  294096 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:31:44.724026  294096 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:31:44.724029  294096 cri.go:89] found id: ""
	I1101 09:31:44.724079  294096 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:44.739858  294096 out.go:203] 
	W1101 09:31:44.742827  294096 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:44.742864  294096 out.go:285] * 
	* 
	W1101 09:31:44.749307  294096 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:44.752149  294096 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-720971 --alsologtostderr -v=1": exit status 11
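This is the enable-side twin of the failures above: the same pre-flight paused check aborts with MK_ADDON_ENABLE_PAUSED before any headlamp manifests are applied. A quick hedged check before retrying, using the same status helper the post-mortem below also runs, to confirm the node and apiserver are up rather than paused:

    out/minikube-linux-arm64 status -p addons-720971
    out/minikube-linux-arm64 addons list -p addons-720971    # shows which addons currently report enabled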
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-720971
helpers_test.go:243: (dbg) docker inspect addons-720971:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897",
	        "Created": "2025-11-01T09:29:10.230050376Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288288,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:29:10.289473763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/hostname",
	        "HostsPath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/hosts",
	        "LogPath": "/var/lib/docker/containers/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897-json.log",
	        "Name": "/addons-720971",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-720971:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-720971",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897",
	                "LowerDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d286f68b4f28ed1023c7f5e9bd2c2e248a7ae7cb8d0f1d21e3a2a542eb849ea7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-720971",
	                "Source": "/var/lib/docker/volumes/addons-720971/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-720971",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-720971",
	                "name.minikube.sigs.k8s.io": "addons-720971",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8851979d0d22902f3cc4de6b037d1dfce977e54cb644d4edd54282862ae106ba",
	            "SandboxKey": "/var/run/docker/netns/8851979d0d22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-720971": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:01:6c:24:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5119f53304fa3253f3af8591ad05d5f56f09adc085fd05368b53e67c3ff3a7b",
	                    "EndpointID": "91180c4a56e50651e273def5c46a2c4ce882c462dfa5479f46dd306f3b137b94",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-720971",
	                        "490d904a357f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
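The full docker inspect dump above is what the post-mortem helper captures. For a quick spot check of the two fields that matter here (container state and the published SSH port), the same Go-template form minikube itself uses in the trace works; a sketch:

    docker container inspect addons-720971 --format '{{.State.Status}}'
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-720971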
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-720971 -n addons-720971
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-720971 logs -n 25: (1.535585185s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-632367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-632367   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-632367                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-632367   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ -o=json --download-only -p download-only-775162 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-775162   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-775162                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-775162   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-632367                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-632367   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-775162                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-775162   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ --download-only -p download-docker-812096 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-812096 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p download-docker-812096                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-812096 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ --download-only -p binary-mirror-960233 --alsologtostderr --binary-mirror http://127.0.0.1:36239 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-960233   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p binary-mirror-960233                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-960233   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ addons  │ enable dashboard -p addons-720971                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-720971                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ start   │ -p addons-720971 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-720971 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-720971 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-720971 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-720971          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
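The header described above is the standard klog prefix (severity letter, date, time, PID, source file), so warning and error records can be filtered out of a dump like this with plain grep. A minimal sketch, assuming the "Last Start" section has been saved to a file named last-start.log (the file name is only illustrative):

	# Keep only W (warning) and E (error) records from a klog-style dump.
	grep -E '^[[:space:]]*[WE][0-9]{4} [0-9:]{8}' last-start.log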
	I1101 09:28:43.703595  287891 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:43.704151  287891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:43.704194  287891 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:43.704218  287891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:43.704543  287891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:28:43.705076  287891 out.go:368] Setting JSON to false
	I1101 09:28:43.705990  287891 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4273,"bootTime":1761985051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:28:43.706095  287891 start.go:143] virtualization:  
	I1101 09:28:43.709314  287891 out.go:179] * [addons-720971] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:28:43.713167  287891 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:28:43.713253  287891 notify.go:221] Checking for updates...
	I1101 09:28:43.719125  287891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:43.721896  287891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:28:43.724669  287891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:28:43.727695  287891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:28:43.730618  287891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:28:43.733810  287891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:43.755264  287891 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:28:43.755403  287891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:43.818052  287891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:28:43.809315097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:43.818164  287891 docker.go:319] overlay module found
	I1101 09:28:43.821322  287891 out.go:179] * Using the docker driver based on user configuration
	I1101 09:28:43.824181  287891 start.go:309] selected driver: docker
	I1101 09:28:43.824201  287891 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:43.824215  287891 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:28:43.824902  287891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:43.886668  287891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 09:28:43.876878567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:43.886825  287891 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:43.887054  287891 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:28:43.889927  287891 out.go:179] * Using Docker driver with root privileges
	I1101 09:28:43.892752  287891 cni.go:84] Creating CNI manager for ""
	I1101 09:28:43.892817  287891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:43.892831  287891 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:43.892914  287891 start.go:353] cluster config:
	{Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:28:43.896080  287891 out.go:179] * Starting "addons-720971" primary control-plane node in "addons-720971" cluster
	I1101 09:28:43.898959  287891 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:43.901942  287891 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:43.904803  287891 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:28:43.904862  287891 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:28:43.904874  287891 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:43.904884  287891 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:43.904971  287891 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:28:43.904981  287891 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:28:43.905318  287891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/config.json ...
	I1101 09:28:43.905337  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/config.json: {Name:mk964ea0c7b731f415496ba07e2cc0c6bc626b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:43.919803  287891 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:43.919932  287891 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:43.919958  287891 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:28:43.919962  287891 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:28:43.919971  287891 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:28:43.919976  287891 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:29:01.846282  287891 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:29:01.846318  287891 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:29:01.846347  287891 start.go:360] acquireMachinesLock for addons-720971: {Name:mkda075e3a51e16fadb53ae3d5bd1928997b2eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:29:01.847142  287891 start.go:364] duration metric: took 772.1µs to acquireMachinesLock for "addons-720971"
	I1101 09:29:01.847177  287891 start.go:93] Provisioning new machine with config: &{Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:01.847270  287891 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:29:01.850641  287891 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:29:01.850867  287891 start.go:159] libmachine.API.Create for "addons-720971" (driver="docker")
	I1101 09:29:01.850901  287891 client.go:173] LocalClient.Create starting
	I1101 09:29:01.851017  287891 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 09:29:02.328580  287891 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 09:29:03.485013  287891 cli_runner.go:164] Run: docker network inspect addons-720971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:29:03.501195  287891 cli_runner.go:211] docker network inspect addons-720971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:29:03.501293  287891 network_create.go:284] running [docker network inspect addons-720971] to gather additional debugging logs...
	I1101 09:29:03.501319  287891 cli_runner.go:164] Run: docker network inspect addons-720971
	W1101 09:29:03.518809  287891 cli_runner.go:211] docker network inspect addons-720971 returned with exit code 1
	I1101 09:29:03.518840  287891 network_create.go:287] error running [docker network inspect addons-720971]: docker network inspect addons-720971: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-720971 not found
	I1101 09:29:03.518855  287891 network_create.go:289] output of [docker network inspect addons-720971]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-720971 not found
	
	** /stderr **
	I1101 09:29:03.518950  287891 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:03.534841  287891 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d86470}
	I1101 09:29:03.534882  287891 network_create.go:124] attempt to create docker network addons-720971 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:29:03.534947  287891 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-720971 addons-720971
	I1101 09:29:03.592536  287891 network_create.go:108] docker network addons-720971 192.168.49.0/24 created
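The inspect-then-create sequence above boils down to two docker CLI calls: try to inspect the per-profile network, and create it with the subnet that was picked if the inspect fails. A minimal sketch using the names and addresses reported in this run (the minikube labels are optional and shown only to mirror the log):

	# Create the addons-720971 bridge network only if it does not already exist.
	docker network inspect addons-720971 >/dev/null 2>&1 || \
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=addons-720971 \
	    addons-720971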
	I1101 09:29:03.592572  287891 kic.go:121] calculated static IP "192.168.49.2" for the "addons-720971" container
	I1101 09:29:03.592644  287891 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:29:03.606786  287891 cli_runner.go:164] Run: docker volume create addons-720971 --label name.minikube.sigs.k8s.io=addons-720971 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:29:03.624649  287891 oci.go:103] Successfully created a docker volume addons-720971
	I1101 09:29:03.624740  287891 cli_runner.go:164] Run: docker run --rm --name addons-720971-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-720971 --entrypoint /usr/bin/test -v addons-720971:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:29:05.750113  287891 cli_runner.go:217] Completed: docker run --rm --name addons-720971-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-720971 --entrypoint /usr/bin/test -v addons-720971:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.125329896s)
	I1101 09:29:05.750145  287891 oci.go:107] Successfully prepared a docker volume addons-720971
	I1101 09:29:05.750178  287891 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:05.750205  287891 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:29:05.750271  287891 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-720971:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:29:10.149410  287891 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-720971:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.399097549s)
	I1101 09:29:10.149463  287891 kic.go:203] duration metric: took 4.399248675s to extract preloaded images to volume ...
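The extraction step timed above is a throwaway container that untars the preloaded image cache into the profile's volume. A minimal sketch of the same call with the long paths shortened (PRELOAD and the image tag stand in for the full values shown in the log):

	# Unpack the lz4 image preload into the addons-720971 docker volume.
	PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro \
	  -v addons-720971:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -I lz4 -xf /preloaded.tar -C /extractDir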
	W1101 09:29:10.149600  287891 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:29:10.149735  287891 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:29:10.214591  287891 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-720971 --name addons-720971 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-720971 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-720971 --network addons-720971 --ip 192.168.49.2 --volume addons-720971:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:29:10.509061  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Running}}
	I1101 09:29:10.528922  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:10.550590  287891 cli_runner.go:164] Run: docker exec addons-720971 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:29:10.602050  287891 oci.go:144] the created container "addons-720971" has a running status.
	I1101 09:29:10.602079  287891 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa...
	I1101 09:29:11.449205  287891 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:29:11.482302  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:11.498401  287891 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:29:11.498424  287891 kic_runner.go:114] Args: [docker exec --privileged addons-720971 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:29:11.541807  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:11.560503  287891 machine.go:94] provisionDockerMachine start ...
	I1101 09:29:11.560606  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:11.577015  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:11.577334  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:11.577344  287891 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:29:11.577928  287891 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53400->127.0.0.1:33139: read: connection reset by peer
	I1101 09:29:14.725273  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-720971
	
	I1101 09:29:14.725298  287891 ubuntu.go:182] provisioning hostname "addons-720971"
	I1101 09:29:14.725364  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:14.742620  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:14.742952  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:14.742969  287891 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-720971 && echo "addons-720971" | sudo tee /etc/hostname
	I1101 09:29:14.898796  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-720971
	
	I1101 09:29:14.898881  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:14.918020  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:14.918322  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:14.918343  287891 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-720971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-720971/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-720971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:29:15.070212  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:29:15.070238  287891 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:29:15.070264  287891 ubuntu.go:190] setting up certificates
	I1101 09:29:15.070275  287891 provision.go:84] configureAuth start
	I1101 09:29:15.070338  287891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-720971
	I1101 09:29:15.088811  287891 provision.go:143] copyHostCerts
	I1101 09:29:15.088904  287891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:29:15.089040  287891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:29:15.089107  287891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:29:15.089165  287891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.addons-720971 san=[127.0.0.1 192.168.49.2 addons-720971 localhost minikube]
	I1101 09:29:15.505475  287891 provision.go:177] copyRemoteCerts
	I1101 09:29:15.505545  287891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:29:15.505589  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:15.523685  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:15.629731  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:29:15.647433  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:29:15.665051  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:29:15.682514  287891 provision.go:87] duration metric: took 612.224341ms to configureAuth
	I1101 09:29:15.682542  287891 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:29:15.682766  287891 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:15.682878  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:15.699734  287891 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:15.700040  287891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1101 09:29:15.700061  287891 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:29:15.953308  287891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:29:15.953333  287891 machine.go:97] duration metric: took 4.392806447s to provisionDockerMachine
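The sysconfig write a few lines above drops a single CRIO_MINIKUBE_OPTIONS line into the node so cri-o treats the 10.96.0.0/12 service range as an insecure registry, then restarts cri-o. If the profile is still running, the generated file can be checked from the host; a minimal sketch (profile name taken from this run):

	# Show the generated cri-o sysconfig inside the addons-720971 node.
	minikube -p addons-720971 ssh -- cat /etc/sysconfig/crio.minikube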
	I1101 09:29:15.953343  287891 client.go:176] duration metric: took 14.102432735s to LocalClient.Create
	I1101 09:29:15.953356  287891 start.go:167] duration metric: took 14.102490583s to libmachine.API.Create "addons-720971"
	I1101 09:29:15.953363  287891 start.go:293] postStartSetup for "addons-720971" (driver="docker")
	I1101 09:29:15.953374  287891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:29:15.953440  287891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:29:15.953490  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:15.970620  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.078247  287891 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:29:16.081798  287891 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:29:16.081829  287891 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:29:16.081844  287891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:29:16.081930  287891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:29:16.081959  287891 start.go:296] duration metric: took 128.589353ms for postStartSetup
	I1101 09:29:16.082285  287891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-720971
	I1101 09:29:16.099481  287891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/config.json ...
	I1101 09:29:16.099770  287891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:29:16.099824  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:16.116654  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.222725  287891 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:29:16.227326  287891 start.go:128] duration metric: took 14.380039259s to createHost
	I1101 09:29:16.227354  287891 start.go:83] releasing machines lock for "addons-720971", held for 14.380196359s
	I1101 09:29:16.227427  287891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-720971
	I1101 09:29:16.244362  287891 ssh_runner.go:195] Run: cat /version.json
	I1101 09:29:16.244431  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:16.244685  287891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:29:16.244747  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:16.265957  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.268831  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:16.369519  287891 ssh_runner.go:195] Run: systemctl --version
	I1101 09:29:16.460628  287891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:29:16.497411  287891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:29:16.502100  287891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:29:16.502190  287891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:29:16.531382  287891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:29:16.531458  287891 start.go:496] detecting cgroup driver to use...
	I1101 09:29:16.531506  287891 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:29:16.531570  287891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:29:16.548159  287891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:29:16.561235  287891 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:29:16.561319  287891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:29:16.578966  287891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:29:16.597250  287891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:29:16.708353  287891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:29:16.831964  287891 docker.go:234] disabling docker service ...
	I1101 09:29:16.832084  287891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:29:16.853099  287891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:29:16.866038  287891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:29:16.974542  287891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:29:17.098053  287891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:29:17.110998  287891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:29:17.125115  287891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:29:17.125181  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.133903  287891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:29:17.133968  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.142822  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.151622  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.160659  287891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:29:17.168771  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.177139  287891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.189946  287891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:17.198280  287891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:29:17.205835  287891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:29:17.213035  287891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:17.323636  287891 ssh_runner.go:195] Run: sudo systemctl restart crio
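All of the sed calls above edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.10.1, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls, before cri-o is restarted. A minimal sketch for inspecting the result on the node (profile name from this run; the grep pattern is only illustrative):

	# List the cri-o settings that were just rewritten in the node container.
	minikube -p addons-720971 ssh -- sudo grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf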
	I1101 09:29:17.446733  287891 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:29:17.446891  287891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:29:17.450960  287891 start.go:564] Will wait 60s for crictl version
	I1101 09:29:17.451070  287891 ssh_runner.go:195] Run: which crictl
	I1101 09:29:17.454461  287891 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:29:17.479480  287891 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:29:17.479633  287891 ssh_runner.go:195] Run: crio --version
	I1101 09:29:17.512441  287891 ssh_runner.go:195] Run: crio --version
	I1101 09:29:17.542824  287891 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:29:17.545635  287891 cli_runner.go:164] Run: docker network inspect addons-720971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:17.561748  287891 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:29:17.565677  287891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:17.575955  287891 kubeadm.go:884] updating cluster {Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:29:17.576078  287891 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:17.576137  287891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:17.612405  287891 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:17.612429  287891 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:29:17.612483  287891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:17.636855  287891 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:17.636879  287891 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:29:17.636887  287891 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:29:17.636970  287891 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-720971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
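The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in (it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down), and the effective unit plus drop-ins can be reviewed on the node afterwards. A minimal sketch, assuming the profile from this run is still up:

	# Print the kubelet unit together with minikube's 10-kubeadm.conf drop-in.
	minikube -p addons-720971 ssh -- sudo systemctl cat kubelet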
	I1101 09:29:17.637052  287891 ssh_runner.go:195] Run: crio config
	I1101 09:29:17.708167  287891 cni.go:84] Creating CNI manager for ""
	I1101 09:29:17.708189  287891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:17.708208  287891 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:29:17.708233  287891 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-720971 NodeName:addons-720971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:29:17.708369  287891 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-720971"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
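The kubeadm configuration dump ends here; it is written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp just below). A rendered config like this can also be checked offline; a minimal sketch, assuming a v1.34-series kubeadm binary (which includes the config validate subcommand) is on PATH and the dump has been saved locally as kubeadm.yaml:

	# Validate the rendered InitConfiguration/ClusterConfiguration/Kubelet/KubeProxy documents.
	kubeadm config validate --config kubeadm.yaml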
	
	I1101 09:29:17.708442  287891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:29:17.715897  287891 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:29:17.715966  287891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:29:17.723547  287891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:29:17.736286  287891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:29:17.749364  287891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1101 09:29:17.761979  287891 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:29:17.765311  287891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
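This is the same rewrite pattern used for host.minikube.internal earlier in the log: filter any existing line for the name out of /etc/hosts, append the fresh mapping, and copy the temp file back into place. A minimal sketch as a reusable function (pin_host is an invented name for illustration):

	# pin_host NAME IP: replace any /etc/hosts entry for NAME with NAME -> IP.
	pin_host() {
	  { grep -vw "$1" /etc/hosts; printf '%s\t%s\n' "$2" "$1"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts
	}
	pin_host control-plane.minikube.internal 192.168.49.2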
	I1101 09:29:17.774401  287891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:17.890507  287891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:29:17.907207  287891 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971 for IP: 192.168.49.2
	I1101 09:29:17.907278  287891 certs.go:195] generating shared ca certs ...
	I1101 09:29:17.907309  287891 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:17.907470  287891 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:29:18.489440  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt ...
	I1101 09:29:18.489475  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt: {Name:mk898dc43af82dfa9231d0fc36cb33f84849bbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.489682  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key ...
	I1101 09:29:18.489713  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key: {Name:mka703e411a1c87bad1de809149144253920e01f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.489813  287891 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:29:18.894654  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt ...
	I1101 09:29:18.894685  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt: {Name:mkc9777fdbfd77c8972d0c36c45bdb2e6f0cac10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.895558  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key ...
	I1101 09:29:18.895578  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key: {Name:mk68326f0acf23235e1be5f28012de152996722a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:18.895667  287891 certs.go:257] generating profile certs ...
	I1101 09:29:18.895728  287891 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.key
	I1101 09:29:18.895745  287891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt with IP's: []
	I1101 09:29:19.255672  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt ...
	I1101 09:29:19.255705  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: {Name:mk594dc5e6a47adfd22abde413a6bc58a616786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.256512  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.key ...
	I1101 09:29:19.256529  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.key: {Name:mk4a7c8ac0a94db5b133b743f0a9e3cc97090ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.257270  287891 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a
	I1101 09:29:19.257301  287891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:29:19.551187  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a ...
	I1101 09:29:19.551220  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a: {Name:mk67928a98e2c7e5fa55dadde3e91a337b63d08f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.551408  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a ...
	I1101 09:29:19.551422  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a: {Name:mk12924766dcee21229beb49a3ba49a59e57dc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:19.552111  287891 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt.0b65546a -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt
	I1101 09:29:19.552200  287891 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key.0b65546a -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key
	I1101 09:29:19.552258  287891 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key
	I1101 09:29:19.552281  287891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt with IP's: []
	I1101 09:29:20.038963  287891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt ...
	I1101 09:29:20.038996  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt: {Name:mk26ce0a562ab7b4a5540e2d463ef07ef7e2ee37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:20.039854  287891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key ...
	I1101 09:29:20.039875  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key: {Name:mk0a7ad61a66a7bb7bbefb5ac9cdaac9e341c325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:20.040723  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:29:20.040770  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:29:20.040801  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:29:20.040828  287891 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:29:20.041392  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:29:20.061507  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:29:20.081613  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:29:20.100971  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:29:20.119229  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:29:20.137847  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:29:20.157728  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:29:20.178134  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:29:20.196349  287891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:29:20.215958  287891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:29:20.228918  287891 ssh_runner.go:195] Run: openssl version
	I1101 09:29:20.235491  287891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:29:20.244038  287891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:20.247878  287891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:20.247943  287891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:20.288815  287891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:29:20.296929  287891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:29:20.300295  287891 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:29:20.300345  287891 kubeadm.go:401] StartCluster: {Name:addons-720971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-720971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:29:20.300426  287891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:29:20.300491  287891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:29:20.328443  287891 cri.go:89] found id: ""
	I1101 09:29:20.328525  287891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:29:20.336165  287891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:29:20.343830  287891 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:29:20.343950  287891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:29:20.351742  287891 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:29:20.351767  287891 kubeadm.go:158] found existing configuration files:
	
	I1101 09:29:20.351817  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:29:20.359528  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:29:20.359642  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:29:20.366734  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:29:20.373979  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:29:20.374042  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:29:20.381193  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:29:20.388590  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:29:20.388664  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:29:20.395756  287891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:29:20.403183  287891 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:29:20.403297  287891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:29:20.410375  287891 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:29:20.448409  287891 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:29:20.448511  287891 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:29:20.478838  287891 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:29:20.478917  287891 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:29:20.478958  287891 kubeadm.go:319] OS: Linux
	I1101 09:29:20.479010  287891 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:29:20.479067  287891 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:29:20.479120  287891 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:29:20.479174  287891 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:29:20.479228  287891 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:29:20.479281  287891 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:29:20.479332  287891 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:29:20.479393  287891 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:29:20.479445  287891 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:29:20.542861  287891 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:29:20.543060  287891 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:29:20.543200  287891 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:29:20.550410  287891 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:29:20.554687  287891 out.go:252]   - Generating certificates and keys ...
	I1101 09:29:20.554880  287891 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:29:20.555019  287891 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:29:21.068063  287891 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:29:21.414777  287891 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:29:21.838592  287891 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:29:22.356176  287891 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:29:22.721627  287891 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:29:22.722044  287891 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-720971 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:23.072337  287891 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:29:23.072676  287891 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-720971 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:23.258085  287891 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:29:23.674734  287891 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:29:23.910846  287891 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:29:23.911389  287891 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:29:24.469718  287891 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:29:24.610830  287891 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:29:25.256448  287891 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:29:26.202512  287891 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:29:26.607998  287891 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:29:26.608616  287891 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:29:26.613560  287891 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:29:26.616881  287891 out.go:252]   - Booting up control plane ...
	I1101 09:29:26.616996  287891 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:29:26.617087  287891 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:29:26.617935  287891 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:29:26.633077  287891 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:29:26.633624  287891 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:29:26.641190  287891 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:29:26.641509  287891 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:29:26.641558  287891 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:29:26.766136  287891 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:29:26.766260  287891 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:29:28.263928  287891 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501631414s
	I1101 09:29:28.268262  287891 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:29:28.268362  287891 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:29:28.268619  287891 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:29:28.268708  287891 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:29:32.038647  287891 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.769187537s
	I1101 09:29:34.106338  287891 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.837372272s
	I1101 09:29:34.770680  287891 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501483318s
	I1101 09:29:34.789900  287891 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:29:34.807446  287891 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:29:34.829183  287891 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:29:34.829439  287891 kubeadm.go:319] [mark-control-plane] Marking the node addons-720971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:29:34.851487  287891 kubeadm.go:319] [bootstrap-token] Using token: s773yf.tbd4dhvjfsergipt
	I1101 09:29:34.854762  287891 out.go:252]   - Configuring RBAC rules ...
	I1101 09:29:34.854894  287891 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:29:34.861136  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:29:34.869685  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:29:34.877093  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:29:34.881394  287891 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:29:34.887263  287891 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:29:35.178240  287891 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:29:35.608069  287891 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:29:36.180510  287891 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:29:36.181463  287891 kubeadm.go:319] 
	I1101 09:29:36.181536  287891 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:29:36.181542  287891 kubeadm.go:319] 
	I1101 09:29:36.181623  287891 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:29:36.181627  287891 kubeadm.go:319] 
	I1101 09:29:36.181653  287891 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:29:36.181726  287891 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:29:36.181782  287891 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:29:36.181786  287891 kubeadm.go:319] 
	I1101 09:29:36.181842  287891 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:29:36.181847  287891 kubeadm.go:319] 
	I1101 09:29:36.181897  287891 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:29:36.181901  287891 kubeadm.go:319] 
	I1101 09:29:36.181956  287891 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:29:36.182033  287891 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:29:36.182111  287891 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:29:36.182117  287891 kubeadm.go:319] 
	I1101 09:29:36.182204  287891 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:29:36.182284  287891 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:29:36.182289  287891 kubeadm.go:319] 
	I1101 09:29:36.182376  287891 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s773yf.tbd4dhvjfsergipt \
	I1101 09:29:36.182483  287891 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 09:29:36.182507  287891 kubeadm.go:319] 	--control-plane 
	I1101 09:29:36.182511  287891 kubeadm.go:319] 
	I1101 09:29:36.182611  287891 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:29:36.182617  287891 kubeadm.go:319] 
	I1101 09:29:36.182702  287891 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s773yf.tbd4dhvjfsergipt \
	I1101 09:29:36.182808  287891 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
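	(Editor's note) The --discovery-token-ca-cert-hash value kubeadm prints in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal sketch of recomputing that value from ca.crt for verification; the file path is the one used on the minikube node, and this is not how minikube itself derives the hash.

```go
// ca_cert_hash.go: a sketch (not part of minikube) that recomputes the
// sha256:<hex> value kubeadm prints as --discovery-token-ca-cert-hash
// by hashing the DER-encoded Subject Public Key Info of the cluster CA.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("ca.crt does not contain a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```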
	I1101 09:29:36.186434  287891 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:29:36.186674  287891 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:29:36.186787  287891 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:29:36.186806  287891 cni.go:84] Creating CNI manager for ""
	I1101 09:29:36.186814  287891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:36.191906  287891 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:29:36.194800  287891 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:29:36.198724  287891 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:29:36.198745  287891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:29:36.212443  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:29:36.511063  287891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:29:36.511218  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:36.511285  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-720971 minikube.k8s.io/updated_at=2025_11_01T09_29_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-720971 minikube.k8s.io/primary=true
	I1101 09:29:36.656824  287891 ops.go:34] apiserver oom_adj: -16
	I1101 09:29:36.656926  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:37.157224  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:37.658033  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:38.157873  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:38.657039  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:39.157430  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:39.657135  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:40.157102  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:40.657512  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:41.157066  287891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:41.280174  287891 kubeadm.go:1114] duration metric: took 4.769019595s to wait for elevateKubeSystemPrivileges
	I1101 09:29:41.280208  287891 kubeadm.go:403] duration metric: took 20.97986616s to StartCluster
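	(Editor's note) The repeated `kubectl get sa default` runs above are minikube waiting for the `default` ServiceAccount to exist before it grants kube-system privileges and declares StartCluster done. A minimal sketch of the same wait using client-go; the kubeconfig path, interval and timeout are illustrative assumptions, not minikube's implementation.

```go
// wait_default_sa.go: a sketch that polls the API server until the "default"
// ServiceAccount exists in the default namespace, mirroring the repeated
// `kubectl get sa default` calls in the log above.
package main

import (
	"context"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the log shows /var/lib/minikube/kubeconfig on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet, keep polling
			}
			return err == nil, err
		})
	if err != nil {
		log.Fatalf("default ServiceAccount never appeared: %v", err)
	}
	log.Println("default ServiceAccount is ready")
}
```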
	I1101 09:29:41.280227  287891 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:41.280958  287891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:29:41.281426  287891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:41.281625  287891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:29:41.281656  287891 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:41.281912  287891 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:41.281943  287891 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:29:41.282020  287891 addons.go:70] Setting yakd=true in profile "addons-720971"
	I1101 09:29:41.282035  287891 addons.go:239] Setting addon yakd=true in "addons-720971"
	I1101 09:29:41.282057  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.282117  287891 addons.go:70] Setting inspektor-gadget=true in profile "addons-720971"
	I1101 09:29:41.282140  287891 addons.go:239] Setting addon inspektor-gadget=true in "addons-720971"
	I1101 09:29:41.282162  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.282513  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.282572  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.282947  287891 addons.go:70] Setting metrics-server=true in profile "addons-720971"
	I1101 09:29:41.282970  287891 addons.go:239] Setting addon metrics-server=true in "addons-720971"
	I1101 09:29:41.282993  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.283440  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.286113  287891 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-720971"
	I1101 09:29:41.286366  287891 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-720971"
	I1101 09:29:41.286436  287891 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-720971"
	I1101 09:29:41.286450  287891 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-720971"
	I1101 09:29:41.286476  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.286926  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.287182  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.288265  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.286269  287891 addons.go:70] Setting cloud-spanner=true in profile "addons-720971"
	I1101 09:29:41.286278  287891 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-720971"
	I1101 09:29:41.286282  287891 addons.go:70] Setting default-storageclass=true in profile "addons-720971"
	I1101 09:29:41.286286  287891 addons.go:70] Setting gcp-auth=true in profile "addons-720971"
	I1101 09:29:41.286289  287891 addons.go:70] Setting ingress=true in profile "addons-720971"
	I1101 09:29:41.286293  287891 addons.go:70] Setting ingress-dns=true in profile "addons-720971"
	I1101 09:29:41.292052  287891 out.go:179] * Verifying Kubernetes components...
	I1101 09:29:41.297159  287891 addons.go:70] Setting registry=true in profile "addons-720971"
	I1101 09:29:41.297279  287891 addons.go:239] Setting addon registry=true in "addons-720971"
	I1101 09:29:41.297320  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.297190  287891 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-720971"
	I1101 09:29:41.297930  287891 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-720971"
	I1101 09:29:41.298288  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.298560  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.297175  287891 addons.go:70] Setting registry-creds=true in profile "addons-720971"
	I1101 09:29:41.314586  287891 addons.go:239] Setting addon registry-creds=true in "addons-720971"
	I1101 09:29:41.314628  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.315089  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.297184  287891 addons.go:70] Setting storage-provisioner=true in profile "addons-720971"
	I1101 09:29:41.316136  287891 addons.go:239] Setting addon storage-provisioner=true in "addons-720971"
	I1101 09:29:41.316166  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.316596  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.314481  287891 addons.go:70] Setting volcano=true in profile "addons-720971"
	I1101 09:29:41.321766  287891 addons.go:239] Setting addon volcano=true in "addons-720971"
	I1101 09:29:41.321808  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.322267  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.328122  287891 addons.go:239] Setting addon cloud-spanner=true in "addons-720971"
	I1101 09:29:41.328174  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.328647  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.314498  287891 addons.go:70] Setting volumesnapshots=true in profile "addons-720971"
	I1101 09:29:41.341876  287891 addons.go:239] Setting addon volumesnapshots=true in "addons-720971"
	I1101 09:29:41.341916  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.342390  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.350695  287891 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-720971"
	I1101 09:29:41.350877  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.351780  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.370787  287891 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-720971"
	I1101 09:29:41.371332  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.393974  287891 mustload.go:66] Loading cluster: addons-720971
	I1101 09:29:41.394221  287891 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:41.394532  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.415363  287891 addons.go:239] Setting addon ingress=true in "addons-720971"
	I1101 09:29:41.415453  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.416024  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.443831  287891 addons.go:239] Setting addon ingress-dns=true in "addons-720971"
	I1101 09:29:41.443897  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.444453  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.494249  287891 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:29:41.520109  287891 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:29:41.524076  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:29:41.524148  287891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:29:41.524267  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
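	(Editor's note) Each addon installer opens its own SSH session to the node, and the `docker container inspect -f ...` calls above are how the host port forwarded to the container's 22/tcp is discovered (here resolving to 127.0.0.1:33139). A small sketch of the same lookup by shelling out to the docker CLI from Go; the template and container name are copied from the log, but this is not minikube's cli_runner.

```go
// ssh_port.go: a sketch that asks the docker CLI for the host port mapped to
// the container's 22/tcp, as the cli_runner lines in the log do.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-720971").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}
```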
	I1101 09:29:41.534215  287891 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:29:41.537514  287891 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:41.537536  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:29:41.537607  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.556990  287891 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:29:41.559173  287891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:41.559273  287891 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:29:41.563441  287891 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:29:41.564493  287891 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:41.564513  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:29:41.564579  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.564763  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:29:41.564773  287891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:29:41.564809  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.577095  287891 addons.go:239] Setting addon default-storageclass=true in "addons-720971"
	I1101 09:29:41.577136  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.577748  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.580298  287891 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:41.580318  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:29:41.580381  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.585923  287891 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:29:41.586016  287891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:29:41.586390  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.591514  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:29:41.591785  287891 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:29:41.596012  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:29:41.596179  287891 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:29:41.599300  287891 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-720971"
	I1101 09:29:41.599343  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.599759  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:41.615692  287891 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:41.615717  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:29:41.615774  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.632297  287891 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:29:41.634221  287891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:29:41.634665  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:41.635520  287891 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:41.635538  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:29:41.635638  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.636431  287891 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:29:41.639386  287891 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:41.639421  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:29:41.639485  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.648066  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:29:41.648089  287891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:29:41.648169  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.659826  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:41.663599  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:41.666623  287891 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:41.666645  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:29:41.666709  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.689164  287891 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:29:41.695431  287891 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:29:41.695459  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:29:41.695527  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	W1101 09:29:41.713722  287891 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:29:41.714936  287891 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:41.714953  287891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:29:41.715020  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.733386  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:29:41.736511  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:29:41.741325  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:29:41.744140  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:29:41.750339  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:29:41.753996  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.754921  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.759503  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:29:41.762371  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:29:41.804909  287891 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:29:41.810026  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.811953  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:29:41.812027  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:29:41.812128  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.821957  287891 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:29:41.826101  287891 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:29:41.829942  287891 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:41.830020  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:29:41.830129  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:41.852364  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.852819  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.853820  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.881481  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.891710  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.895862  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.929858  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.930022  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.939853  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	W1101 09:29:41.952405  287891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:29:41.952441  287891 retry.go:31] will retry after 170.670867ms: ssh: handshake failed: EOF
	I1101 09:29:41.955440  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.966579  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:41.969995  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	W1101 09:29:41.972748  287891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:29:41.972770  287891 retry.go:31] will retry after 180.891157ms: ssh: handshake failed: EOF
	I1101 09:29:42.074669  287891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 09:29:42.155038  287891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:29:42.155133  287891 retry.go:31] will retry after 494.620939ms: ssh: handshake failed: EOF
	I1101 09:29:42.422795  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:42.500023  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:42.506637  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:42.538630  287891 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:29:42.538655  287891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:29:42.591941  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:29:42.591964  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:29:42.606057  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:29:42.606082  287891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:29:42.623995  287891 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:29:42.624021  287891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:29:42.638371  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:42.707525  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:42.710221  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:42.711502  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:42.713495  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:42.725913  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:29:42.725938  287891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:29:42.733754  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:29:42.733780  287891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:29:42.763328  287891 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:42.763352  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:29:42.765618  287891 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:29:42.765639  287891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:29:42.770367  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:42.779034  287891 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:29:42.779058  287891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:29:42.884552  287891 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:42.884578  287891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:29:42.887096  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:29:42.887121  287891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:29:42.901324  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:29:42.901349  287891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:29:42.931899  287891 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:42.931921  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:29:42.933401  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:43.075359  287891 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:43.075382  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:29:43.091874  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:43.108598  287891 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:43.108623  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:29:43.151382  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:43.185164  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:43.211653  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:43.239090  287891 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.164386951s)
	I1101 09:29:43.239846  287891 node_ready.go:35] waiting up to 6m0s for node "addons-720971" to be "Ready" ...
	I1101 09:29:43.240072  287891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.605817405s)
	I1101 09:29:43.240093  287891 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 09:29:43.397226  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:29:43.397261  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:29:43.517411  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.09457234s)
	I1101 09:29:43.671860  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:29:43.671930  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:29:43.748097  287891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-720971" context rescaled to 1 replicas
	I1101 09:29:43.778415  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.278352576s)
	I1101 09:29:43.927951  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:29:43.927976  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:29:44.191170  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:29:44.191196  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:29:44.442120  287891 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:29:44.442148  287891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:29:44.633186  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:29:44.633210  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:29:44.848755  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:29:44.848781  287891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:29:45.077024  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:29:45.077053  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:29:45.203919  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:29:45.203947  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:29:45.215833  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.577424007s)
	I1101 09:29:45.215900  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.508352193s)
	I1101 09:29:45.216170  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.709506378s)
	W1101 09:29:45.243414  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:45.247527  287891 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:45.247558  287891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:29:45.337813  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:45.411700  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.701443155s)
	I1101 09:29:46.281086  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.569549011s)
	W1101 09:29:47.245870  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:47.422514  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.489079921s)
	W1101 09:29:47.422551  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:47.422570  287891 retry.go:31] will retry after 243.431697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
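The repeated apply failures above all trace back to the same client-side validation error: /etc/kubernetes/addons/ig-crd.yaml has no top-level apiVersion or kind fields, so kubectl cannot map the file to any object and rejects it on every retry, even though the other gadget resources in the batch apply cleanly. For reference, a minimal sketch of the header a CustomResourceDefinition manifest needs to pass this validation is shown here; the group, kind, and schema below are illustrative assumptions, not the actual contents of ig-crd.yaml.

# Minimal CRD sketch, assuming the usual apiextensions.k8s.io/v1 layout.
# All names here are hypothetical placeholders, not the real ig-crd.yaml content;
# the point is only that apiVersion and kind must be set at the top level.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.gadget.example.io
spec:
  group: gadget.example.io
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true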
	I1101 09:29:47.422630  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.330731092s)
	I1101 09:29:47.422646  287891 addons.go:480] Verifying addon metrics-server=true in "addons-720971"
	I1101 09:29:47.422677  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.271271252s)
	I1101 09:29:47.422694  287891 addons.go:480] Verifying addon registry=true in "addons-720971"
	I1101 09:29:47.422865  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.652030813s)
	I1101 09:29:47.423183  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.237987719s)
	I1101 09:29:47.423355  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.211671461s)
	W1101 09:29:47.423389  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:29:47.423405  287891 retry.go:31] will retry after 222.993155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
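The snapshot manifests fail in a different, ordering-related way: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass, but in the first batched apply the snapshot.storage.k8s.io CRDs created by the same command are not yet registered, so the resource mapping lookup fails and kubectl exits non-zero even though the other objects in the batch were created. Below is a minimal sketch of the kind of object being rejected, assuming the upstream snapshot.storage.k8s.io/v1 API; only the resource name comes from the error above, the driver and deletionPolicy values are illustrative guesses.

# Sketch of a VolumeSnapshotClass like the one the error refers to. It can only
# be applied once the VolumeSnapshotClass CRD is registered, which is why the
# same batch is re-applied with --force at 09:29:47.647 below.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   # assumed driver name for the csi-hostpath addon
deletionPolicy: Delete        # assumed; any valid policy would do for the sketch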
	I1101 09:29:47.423551  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.710033172s)
	I1101 09:29:47.423566  287891 addons.go:480] Verifying addon ingress=true in "addons-720971"
	I1101 09:29:47.426178  287891 out.go:179] * Verifying registry addon...
	I1101 09:29:47.426229  287891 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-720971 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:29:47.428223  287891 out.go:179] * Verifying ingress addon...
	I1101 09:29:47.431693  287891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:29:47.431797  287891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:29:47.440472  287891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:29:47.440502  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.441008  287891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:29:47.441030  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:47.647203  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:47.658419  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.320555674s)
	I1101 09:29:47.658456  287891 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-720971"
	I1101 09:29:47.662094  287891 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:29:47.665680  287891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:29:47.666088  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:47.686838  287891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:29:47.686867  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:47.936965  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.937101  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.179294  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.436013  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.436535  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:48.669675  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.723548  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.057427018s)
	W1101 09:29:48.723582  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:48.723602  287891 retry.go:31] will retry after 561.602325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:48.935868  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:48.936124  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.169489  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:49.269070  287891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:29:49.269171  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:49.285777  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:49.287029  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:49.406273  287891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:29:49.422815  287891 addons.go:239] Setting addon gcp-auth=true in "addons-720971"
	I1101 09:29:49.422861  287891 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:29:49.423317  287891 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:29:49.437185  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:49.437271  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.444200  287891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:29:49.444257  287891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:29:49.462647  287891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:29:49.669124  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:49.743165  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:49.936038  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.936112  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:50.113229  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.113313  287891 retry.go:31] will retry after 836.696112ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.117205  287891 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:29:50.120107  287891 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:50.122873  287891 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:29:50.122904  287891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:29:50.137922  287891 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:29:50.137946  287891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:29:50.151723  287891 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:50.151749  287891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:29:50.165655  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:50.169830  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:50.435848  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.436467  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:50.655819  287891 addons.go:480] Verifying addon gcp-auth=true in "addons-720971"
	I1101 09:29:50.659722  287891 out.go:179] * Verifying gcp-auth addon...
	I1101 09:29:50.664709  287891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:29:50.683021  287891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:29:50.683045  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:50.687414  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:50.935689  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.936169  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:50.951177  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:51.170001  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.170113  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:51.437050  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.437425  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:51.673762  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.674449  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:51.764649  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:51.764687  287891 retry.go:31] will retry after 948.865158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:51.935549  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.935707  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.167894  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.168853  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:52.243545  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:52.435604  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.435693  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.668697  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.669495  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:52.714692  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:52.936738  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.937078  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:53.170520  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:53.172035  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.436769  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.437045  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:53.523497  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:53.523531  287891 retry.go:31] will retry after 945.858273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:53.669134  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.669324  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:53.935357  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.935717  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.167595  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.168379  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:54.435685  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.435794  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.469986  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:54.672515  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.672596  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:54.743666  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:54.936293  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.937036  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.168386  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.168846  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:55.291948  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:55.291987  287891 retry.go:31] will retry after 1.260772996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:55.434811  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:55.435424  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.668805  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:55.669311  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.935499  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:55.935650  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.168446  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.168914  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.435087  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.435231  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.553611  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:56.672621  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.673355  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.935787  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.936213  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.167773  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.168522  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:57.242538  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	W1101 09:29:57.346403  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:57.346472  287891 retry.go:31] will retry after 1.684425992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:57.436101  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.436231  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.669060  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.669419  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:57.935671  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.935811  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.167667  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:58.168887  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.434653  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:58.435043  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.668644  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:58.669377  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.935489  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:58.935990  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.031084  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:59.170307  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.170572  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:59.259409  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:29:59.436983  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.437349  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.673813  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.673957  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:59.858466  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:59.858496  287891 retry.go:31] will retry after 3.168392768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:59.942013  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.942258  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.191674  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.191830  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.466376  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:00.469259  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.672728  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.673135  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.935791  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:00.936440  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.170147  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.171493  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:01.436241  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:01.436465  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.669384  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.669570  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:01.743514  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:01.935589  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.935656  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.167411  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.168689  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.435625  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:02.436216  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.667966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.669122  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.935118  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.935706  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:03.027938  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:03.169566  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.169726  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:03.436129  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.437296  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:03.670826  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.671006  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:03.743657  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	W1101 09:30:03.873883  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:03.873917  287891 retry.go:31] will retry after 5.89836222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:03.935669  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.935808  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.168500  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.168567  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.434991  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.435471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.672089  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.672192  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.936105  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.936361  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.169115  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:05.169196  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:05.435283  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.435446  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.668810  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:05.669291  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:05.935797  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.936336  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.169106  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.169247  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:06.243073  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:06.435887  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.436099  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.667613  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:06.668768  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.935694  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.936036  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.168141  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.168954  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.435087  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:07.435312  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.673318  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.673490  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.935906  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.935983  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:08.167603  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.168753  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:08.243767  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:08.435051  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.435379  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:08.668710  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:08.669503  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.934853  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.935228  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.168256  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.169490  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.435966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.436085  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:09.668340  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.669245  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.773384  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:09.943394  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.945151  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:10.170169  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.170324  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:10.437117  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.437515  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:30:10.599841  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:10.599875  287891 retry.go:31] will retry after 10.207833999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:10.668181  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.668716  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:10.743713  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:10.935115  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.935362  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.170330  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.171164  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.435023  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.436347  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.667305  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.668378  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.935278  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.935956  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.169144  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.169768  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.435760  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.436300  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.669366  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.669514  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.935340  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.935774  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.167402  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.168335  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:13.243207  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:13.435491  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:13.435703  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.667511  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.668401  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:13.937080  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.937184  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.167844  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:14.168925  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.435737  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.435909  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:14.668942  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:14.669104  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.935427  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.935827  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.167828  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:15.168871  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.435040  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.435242  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.669029  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.669280  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:15.743278  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:15.935547  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.935978  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.167933  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.169231  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.435715  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.436085  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.668879  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.669424  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.935723  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.936056  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.167911  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.169223  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.435535  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.435688  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.667876  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.668772  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.935116  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.935306  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.168833  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.169223  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:18.243028  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:18.435599  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.435922  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.667531  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.667823  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:18.934768  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.935422  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.168403  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.168935  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.435780  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.436138  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.667749  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.669398  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.935477  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.935689  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.167712  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.168919  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:20.243396  287891 node_ready.go:57] node "addons-720971" has "Ready":"False" status (will retry)
	I1101 09:30:20.435954  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.436138  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.668103  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.668565  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:20.808082  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:20.936739  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.937088  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.168652  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:21.169529  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.436221  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.437083  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.669992  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:21.671073  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:21.671137  287891 retry.go:31] will retry after 18.178879218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:21.671733  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.936068  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.936201  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.177166  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.181558  287891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:30:22.181583  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.248010  287891 node_ready.go:49] node "addons-720971" is "Ready"
	I1101 09:30:22.248041  287891 node_ready.go:38] duration metric: took 39.008163158s for node "addons-720971" to be "Ready" ...
	I1101 09:30:22.248064  287891 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:22.248136  287891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:22.270188  287891 api_server.go:72] duration metric: took 40.988489149s to wait for apiserver process to appear ...
	I1101 09:30:22.270218  287891 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:22.270237  287891 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:30:22.282789  287891 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 09:30:22.293181  287891 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:22.293213  287891 api_server.go:131] duration metric: took 22.988366ms to wait for apiserver health ...
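[Editor's note] Once the node reports Ready, the lines above show the next gate: probing the apiserver's healthz endpoint at https://192.168.49.2:8443/healthz until it answers 200, then reading the control-plane version. A minimal Go sketch of that kind of polling follows; the URL comes from the log, while `waitForHealthz`, the 500ms interval, and the InsecureSkipVerify setting (standing in for the cluster CA that minikube really uses) are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Skipping TLS verification is only for this sketch; a real check would
// trust the cluster's CA certificate instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not return 200 within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}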
	I1101 09:30:22.293223  287891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:22.358993  287891 system_pods.go:59] 19 kube-system pods found
	I1101 09:30:22.359029  287891 system_pods.go:61] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending
	I1101 09:30:22.359036  287891 system_pods.go:61] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending
	I1101 09:30:22.359040  287891 system_pods.go:61] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending
	I1101 09:30:22.359045  287891 system_pods.go:61] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending
	I1101 09:30:22.359050  287891 system_pods.go:61] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.359055  287891 system_pods.go:61] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.359059  287891 system_pods.go:61] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.359064  287891 system_pods.go:61] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.359068  287891 system_pods.go:61] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending
	I1101 09:30:22.359074  287891 system_pods.go:61] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.359079  287891 system_pods.go:61] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.359088  287891 system_pods.go:61] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.359101  287891 system_pods.go:61] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending
	I1101 09:30:22.359107  287891 system_pods.go:61] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending
	I1101 09:30:22.359114  287891 system_pods.go:61] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.359122  287891 system_pods.go:61] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending
	I1101 09:30:22.359128  287891 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending
	I1101 09:30:22.359136  287891 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.359146  287891 system_pods.go:61] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.359154  287891 system_pods.go:74] duration metric: took 65.923954ms to wait for pod list to return data ...
	I1101 09:30:22.359164  287891 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:22.369363  287891 default_sa.go:45] found service account: "default"
	I1101 09:30:22.369390  287891 default_sa.go:55] duration metric: took 10.217342ms for default service account to be created ...
	I1101 09:30:22.369400  287891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:22.401552  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:22.401593  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending
	I1101 09:30:22.401599  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending
	I1101 09:30:22.401603  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending
	I1101 09:30:22.401608  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending
	I1101 09:30:22.401612  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.401616  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.401621  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.401625  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.401629  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending
	I1101 09:30:22.401633  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.401637  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.401654  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.401664  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending
	I1101 09:30:22.401670  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending
	I1101 09:30:22.401676  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.401750  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending
	I1101 09:30:22.401765  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.401773  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.401788  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.401810  287891 retry.go:31] will retry after 195.705468ms: missing components: kube-dns
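[Editor's note] The block above is the system_pods poll: list the kube-system pods, and retry while a required component (here kube-dns, i.e. coredns) is not yet Running. A small client-go sketch of the same idea, assuming the kubeconfig path shown in the log and the standard `k8s-app=kube-dns` label on coredns pods; `waitForKubeDNS` and the 300ms interval are illustrative, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeDNS polls kube-system pods until at least one pod labelled
// k8s-app=kube-dns reports phase Running, mirroring the
// "missing components: kube-dns" retries in the log above.
func waitForKubeDNS(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(300 * time.Millisecond)
	}
	return fmt.Errorf("kube-dns pod not Running within %v", timeout)
}

func main() {
	if err := waitForKubeDNS("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}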
	I1101 09:30:22.469840  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.469917  287891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:30:22.469931  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.601827  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:22.601872  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:22.601879  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending
	I1101 09:30:22.601890  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:22.601897  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:22.601902  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.601907  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.601911  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.601921  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.601926  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending
	I1101 09:30:22.601930  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.601934  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.601953  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.601965  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending
	I1101 09:30:22.601972  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:22.601986  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.601991  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending
	I1101 09:30:22.601997  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.602007  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.602035  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.602054  287891 retry.go:31] will retry after 269.428383ms: missing components: kube-dns
	I1101 09:30:22.675155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.679229  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.880157  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:22.880203  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:22.880214  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:22.880222  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:22.880230  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:22.880241  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:22.880252  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:22.880257  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:22.880275  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:22.880283  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:22.880292  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:22.880297  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:22.880303  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:22.880313  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:22.880322  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:22.880331  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:22.880338  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:22.880357  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.880365  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:22.880376  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:22.880392  287891 retry.go:31] will retry after 334.275735ms: missing components: kube-dns
	I1101 09:30:22.976734  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.977065  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.178462  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.178621  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.291237  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:23.291285  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:23.291295  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:23.291303  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:23.291311  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:23.291320  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:23.291326  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:23.291339  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:23.291352  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:23.291372  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:23.291376  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:23.291387  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:23.291394  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:23.291401  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:23.291409  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:23.291416  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:23.291433  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:23.291440  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.291455  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.291462  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:23.291482  287891 retry.go:31] will retry after 513.832273ms: missing components: kube-dns
	I1101 09:30:23.437920  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.438124  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.668020  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.675210  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.812023  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:23.812059  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:23.812069  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:23.812078  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:23.812086  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:23.812105  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:23.812110  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:23.812115  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:23.812125  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:23.812133  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:23.812143  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:23.812147  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:23.812154  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:23.812171  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:23.812177  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:23.812186  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:23.812193  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:23.812204  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.812212  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:23.812223  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:23.812245  287891 retry.go:31] will retry after 706.181805ms: missing components: kube-dns
	I1101 09:30:23.943966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.945179  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.170451  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.170655  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.436545  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.436681  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.526040  287891 system_pods.go:86] 19 kube-system pods found
	I1101 09:30:24.526077  287891 system_pods.go:89] "coredns-66bc5c9577-4fl56" [0f936b0f-c46a-4f4c-836a-5f55dfc2dc0e] Running
	I1101 09:30:24.526090  287891 system_pods.go:89] "csi-hostpath-attacher-0" [84173609-25e4-4457-b089-2f7ee282db14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:24.526097  287891 system_pods.go:89] "csi-hostpath-resizer-0" [917e09b3-24e7-496b-997e-bb1a8aeb1ea3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:24.526106  287891 system_pods.go:89] "csi-hostpathplugin-hc2br" [fba1a612-6236-411d-acbb-9744468acc7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:24.526111  287891 system_pods.go:89] "etcd-addons-720971" [a059f756-ce92-464c-8b31-d92c79ec7254] Running
	I1101 09:30:24.526116  287891 system_pods.go:89] "kindnet-trnz5" [7453a3d7-2d10-49f8-81f1-d109bcfb327b] Running
	I1101 09:30:24.526125  287891 system_pods.go:89] "kube-apiserver-addons-720971" [a048c94f-0a39-438a-85c9-83c8629e4c7e] Running
	I1101 09:30:24.526129  287891 system_pods.go:89] "kube-controller-manager-addons-720971" [4844f609-fd44-435a-af30-fa866c3bc453] Running
	I1101 09:30:24.526137  287891 system_pods.go:89] "kube-ingress-dns-minikube" [08819647-5e84-4317-98d5-4bbd212cf396] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:24.526149  287891 system_pods.go:89] "kube-proxy-p9fft" [c6e48d11-ecf0-4512-a6e6-b7132a745896] Running
	I1101 09:30:24.526155  287891 system_pods.go:89] "kube-scheduler-addons-720971" [a0278560-b06b-40e4-9eca-f5e76ded5ec0] Running
	I1101 09:30:24.526163  287891 system_pods.go:89] "metrics-server-85b7d694d7-pv7v7" [73797c21-58cf-472a-a533-56569b7faae5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:24.526175  287891 system_pods.go:89] "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:24.526181  287891 system_pods.go:89] "registry-6b586f9694-5d8hv" [eb89e450-0cea-4f66-9576-a21e92d593c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:24.526195  287891 system_pods.go:89] "registry-creds-764b6fb674-7sxv4" [f830ed47-72eb-4e5e-b87f-fb1b4985d259] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:24.526202  287891 system_pods.go:89] "registry-proxy-tml2d" [2bed8301-a3b1-482c-9b46-cc6149207dc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:24.526208  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dnt8c" [a597c14f-9774-4820-b32c-572195247794] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:24.526218  287891 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kph7c" [8a333117-9170-439c-87f6-f1cb398c5779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:24.526223  287891 system_pods.go:89] "storage-provisioner" [b023ad3d-dd55-45fc-b10e-5e7f916c75f4] Running
	I1101 09:30:24.526232  287891 system_pods.go:126] duration metric: took 2.156825817s to wait for k8s-apps to be running ...
	I1101 09:30:24.526245  287891 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:24.526302  287891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:24.545784  287891 system_svc.go:56] duration metric: took 19.528281ms WaitForService to wait for kubelet
	I1101 09:30:24.545813  287891 kubeadm.go:587] duration metric: took 43.264134003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:24.545832  287891 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:24.549754  287891 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:30:24.549789  287891 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:24.549802  287891 node_conditions.go:105] duration metric: took 3.964088ms to run NodePressure ...
	I1101 09:30:24.549814  287891 start.go:242] waiting for startup goroutines ...
	I1101 09:30:24.668143  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.669285  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.940082  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.940379  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.170281  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.170417  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.438147  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.438749  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.668069  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.670917  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.937376  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.937753  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.170656  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.171220  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.435577  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:26.436388  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.671244  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.671430  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.938090  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.938690  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.169111  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.169537  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:27.435793  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.436316  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.669065  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.669890  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:27.935947  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.936155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.168609  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.170739  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.436754  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.437195  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.672029  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.672309  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.936359  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.936545  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.170146  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.170869  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.435233  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.435749  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:29.668438  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.668652  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.936297  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.936490  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.175062  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.176835  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.436478  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.437157  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.668788  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.669639  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.935260  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.935476  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.176833  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.177309  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.442785  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.443906  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.679573  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.680040  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.936891  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.936951  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.172333  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.173177  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.441538  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:32.442102  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.672261  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.672733  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.940175  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.940579  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.169243  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.169956  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.439153  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:33.439399  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.669892  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.670078  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.935537  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.935852  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.168361  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.168574  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.436877  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.436995  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.671820  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.672077  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.936058  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.936222  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.171344  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.172281  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.436996  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.437202  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.669868  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.673974  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.936738  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.937176  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.169598  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.171886  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.435409  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:36.435875  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.669652  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.669982  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.936432  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.936831  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.168633  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.170625  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.436833  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.437249  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.670685  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.670867  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.938381  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.939471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.169230  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.169473  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.437906  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:38.438896  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.670934  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.671203  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.935577  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.936978  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.171994  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.172733  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.436155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.436616  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.671239  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.671676  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.851136  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:39.937471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.937941  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.173970  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.174354  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.438078  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.438521  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.674997  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.675479  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.949659  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.950377  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.027801  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.176576583s)
	W1101 09:30:41.027901  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:41.027963  287891 retry.go:31] will retry after 32.217754575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
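The apply failure above comes from kubectl's client-side validation: every document in a manifest must declare apiVersion and kind, and the error reports that ig-crd.yaml has neither, so the apply exits with status 1 and minikube schedules a retry. The following Go sketch shows only that missing-field check; it is not kubectl's validator, and the document contents and names in it are illustrative.

// Sketch only: flags manifest documents that omit apiVersion or kind,
// the same complaint kubectl prints in the stderr above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta holds the two fields every Kubernetes manifest document must declare.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// validate returns the problems kubectl would report for a single document.
func validate(doc []byte) []string {
	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return []string{fmt.Sprintf("cannot parse document: %v", err)}
	}
	var problems []string
	if tm.APIVersion == "" {
		problems = append(problems, "apiVersion not set")
	}
	if tm.Kind == "" {
		problems = append(problems, "kind not set")
	}
	return problems
}

func main() {
	// A document missing both fields, as the error above reports for ig-crd.yaml.
	bad := []byte("metadata:\n  name: example\n")
	fmt.Println(validate(bad)) // [apiVersion not set kind not set]

	// A document with both fields set passes this check (names illustrative).
	good := []byte("apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  name: example\n")
	fmt.Println(validate(good)) // []
}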
	I1101 09:30:41.170138  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.170656  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.437630  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.437792  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:41.668964  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.671815  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.937080  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.937438  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:42.172966  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.173174  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.437460  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:42.437905  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.672591  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.673022  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.937137  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.937507  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:43.221197  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.227143  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.437058  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.437411  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:43.672828  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.673224  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.936204  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.936314  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:44.171163  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.171365  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.437661  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.438020  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:44.669031  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.672809  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.940924  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.941323  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:45.178613  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.180062  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.439550  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:45.439996  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:45.673329  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.673771  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.937058  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:45.937492  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.170572  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.170855  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.436870  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:46.437258  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.670484  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.670896  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.936132  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.937384  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:47.170126  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.170651  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.436012  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:47.436492  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.671389  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.671616  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.936169  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.936311  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:48.170000  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.170345  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.437016  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:48.437212  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.671218  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.671303  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.936611  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.937266  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:49.168476  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.171588  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.437508  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:49.437940  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:49.672167  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.672500  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.934434  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:49.936357  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.169564  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.169760  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.437829  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.438211  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:50.674710  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.677122  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.937751  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:50.938077  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.172737  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.173355  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.440196  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.440699  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:51.668883  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.669027  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.934912  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:51.936136  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.170237  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.170728  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.436878  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.438229  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:52.702300  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.712110  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.951189  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:52.954209  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.170561  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.170653  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.467814  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:53.467901  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.671592  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.671785  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.936313  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.936453  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:54.170396  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.170993  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.435955  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.436850  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:54.668202  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.671053  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.935933  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.936611  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:55.171998  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:55.177801  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.439018  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:55.439198  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:55.668937  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:55.670914  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.935977  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:55.937070  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.170412  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:56.170565  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.436219  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:56.436748  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.669927  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.671126  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:56.936626  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.937044  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:57.169827  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:57.180220  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.436844  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.437615  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:57.670171  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:57.670632  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.935027  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.935185  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:58.174926  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:58.175121  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.436880  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.438013  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:58.667718  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:58.669228  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.935879  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.936021  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:59.169439  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.169650  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:59.435350  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:59.435569  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:59.669597  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.677941  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:59.935192  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:59.935450  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.232144  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:00.232747  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.437079  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.439374  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:00.670253  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:00.674398  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.936165  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.936316  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:01.169439  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:01.170582  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:01.448491  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:01.448924  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:01.670827  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:01.670976  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:01.938219  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:31:01.938465  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:02.170331  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:02.170591  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.435523  287891 kapi.go:107] duration metric: took 1m15.003814171s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:31:02.435684  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:02.669915  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:02.674151  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.937097  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:03.168899  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:03.170024  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.435609  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:03.667349  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:03.668926  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.935357  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:04.169225  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:04.169451  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.436041  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:04.668730  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:04.670479  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.936182  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:05.172973  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:05.173183  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:05.436123  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:05.670514  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:05.670694  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:05.934826  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:06.170289  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:06.170698  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:06.435301  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:06.669767  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:06.669961  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:06.934850  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:07.170347  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:07.170425  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:31:07.436224  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:07.670030  287891 kapi.go:107] duration metric: took 1m17.005325249s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:31:07.670553  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:07.673434  287891 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-720971 cluster.
	I1101 09:31:07.676291  287891 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:31:07.679379  287891 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
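The gcp-auth messages above describe the addon's behaviour once its pod is ready: credentials are mounted into every newly created pod unless the pod carries a label with the gcp-auth-skip-secret key, and existing pods only pick up the mount after being recreated or after re-running addons enable with --refresh. Below is a minimal Go sketch of opting a pod out via that label, using the Kubernetes API types; the pod name, image, and label value are illustrative, since the log only names the label key.

// Sketch only: a pod spec carrying the gcp-auth-skip-secret label
// mentioned in the messages above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "example",
			Namespace: "default",
			// The message above says to add a label with this key;
			// "true" is just an illustrative value.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"},
			},
		},
	}
	fmt.Println(pod.Name, pod.Labels)
}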
	I1101 09:31:07.934649  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:08.171110  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:08.435716  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:08.671018  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:08.935831  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:09.169506  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:09.436531  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:09.670084  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:09.935279  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:10.170139  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:10.435352  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:10.669602  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:10.935568  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:11.170155  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:11.434919  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:11.669132  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:11.935161  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:12.169789  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:12.435750  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:12.672217  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:12.936384  287891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:13.169562  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:13.246793  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:31:13.442447  287891 kapi.go:107] duration metric: took 1m26.010649276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:31:13.672238  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:14.170471  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:14.645281  287891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.398452607s)
	W1101 09:31:14.645315  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:31:14.645333  287891 retry.go:31] will retry after 16.272308355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:31:14.670941  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:15.172669  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:15.668870  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:16.169633  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:16.670952  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:17.179647  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:17.674210  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:18.186648  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:18.675606  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:19.171379  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:19.670152  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:20.170547  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:20.672162  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:21.174127  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:21.671498  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:22.168709  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:22.669045  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:23.169584  287891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:23.669164  287891 kapi.go:107] duration metric: took 1m36.003483485s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:31:30.918424  287891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:31:31.794110  287891 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:31:31.794200  287891 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:31:31.797657  287891 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, default-storageclass, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1101 09:31:31.800546  287891 addons.go:515] duration metric: took 1m50.518577954s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin storage-provisioner default-storageclass cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1101 09:31:31.800593  287891 start.go:247] waiting for cluster config update ...
	I1101 09:31:31.800615  287891 start.go:256] writing updated cluster config ...
	I1101 09:31:31.800898  287891 ssh_runner.go:195] Run: rm -f paused
	I1101 09:31:31.804477  287891 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:31.808221  287891 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4fl56" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.812585  287891 pod_ready.go:94] pod "coredns-66bc5c9577-4fl56" is "Ready"
	I1101 09:31:31.812611  287891 pod_ready.go:86] duration metric: took 4.364148ms for pod "coredns-66bc5c9577-4fl56" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.814845  287891 pod_ready.go:83] waiting for pod "etcd-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.819242  287891 pod_ready.go:94] pod "etcd-addons-720971" is "Ready"
	I1101 09:31:31.819269  287891 pod_ready.go:86] duration metric: took 4.362761ms for pod "etcd-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.821678  287891 pod_ready.go:83] waiting for pod "kube-apiserver-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.826310  287891 pod_ready.go:94] pod "kube-apiserver-addons-720971" is "Ready"
	I1101 09:31:31.826375  287891 pod_ready.go:86] duration metric: took 4.591903ms for pod "kube-apiserver-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:31.828671  287891 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:32.208245  287891 pod_ready.go:94] pod "kube-controller-manager-addons-720971" is "Ready"
	I1101 09:31:32.208312  287891 pod_ready.go:86] duration metric: took 379.616372ms for pod "kube-controller-manager-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:32.408477  287891 pod_ready.go:83] waiting for pod "kube-proxy-p9fft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:32.808783  287891 pod_ready.go:94] pod "kube-proxy-p9fft" is "Ready"
	I1101 09:31:32.808812  287891 pod_ready.go:86] duration metric: took 400.266182ms for pod "kube-proxy-p9fft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:33.009433  287891 pod_ready.go:83] waiting for pod "kube-scheduler-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:33.408678  287891 pod_ready.go:94] pod "kube-scheduler-addons-720971" is "Ready"
	I1101 09:31:33.408710  287891 pod_ready.go:86] duration metric: took 399.250289ms for pod "kube-scheduler-addons-720971" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:33.408723  287891 pod_ready.go:40] duration metric: took 1.604217801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:33.466762  287891 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:31:33.470270  287891 out.go:179] * Done! kubectl is now configured to use "addons-720971" cluster and "default" namespace by default
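	The inspektor-gadget retries above fail on kubectl's client-side schema validation: every document in an applied manifest must declare `apiVersion` and `kind`, and the copy of ig-crd.yaml on the node evidently contains a document missing both. A minimal sketch of that check in Python (the manifest text here is hypothetical, not the actual ig-crd.yaml):

	```python
	import yaml  # requires PyYAML

	manifest = """\
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.kinvolk.io   # illustrative name only
	---
	metadata:
	  name: doc-missing-required-fields
	"""

	for i, doc in enumerate(yaml.safe_load_all(manifest)):
	    if doc is None:
	        continue
	    missing = [f for f in ("apiVersion", "kind") if f not in doc]
	    if missing:
	        # kubectl reports the same condition as:
	        # error validating data: [apiVersion not set, kind not set]
	        print(f"document {i}: " + ", ".join(m + " not set" for m in missing))
	```

	Passing `--validate=false`, as the error message suggests, only suppresses the check; the retry at 09:31:30 fails identically because the file on disk is unchanged.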
	
	
	==> CRI-O <==
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.258624732Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.523462261Z" level=info msg="Removing container: 62318c77c543c15ab0c3f838f2fca268885480919643cffaefce48516691316d" id=add2963b-f6a1-44d7-99f6-8d99158d1cb1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.525767046Z" level=info msg="Error loading conmon cgroup of container 62318c77c543c15ab0c3f838f2fca268885480919643cffaefce48516691316d: cgroup deleted" id=add2963b-f6a1-44d7-99f6-8d99158d1cb1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.537420324Z" level=info msg="Removed container 62318c77c543c15ab0c3f838f2fca268885480919643cffaefce48516691316d: gcp-auth/gcp-auth-certs-patch-6wm9s/patch" id=add2963b-f6a1-44d7-99f6-8d99158d1cb1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.539384391Z" level=info msg="Removing container: dfb31344b5ce4c05b79dba298738ed8c98fe805d1e943b90da49d4fc8e81f097" id=ff37b9c7-7ecf-4305-b6b2-a05412fa3350 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.542274297Z" level=info msg="Error loading conmon cgroup of container dfb31344b5ce4c05b79dba298738ed8c98fe805d1e943b90da49d4fc8e81f097: cgroup deleted" id=ff37b9c7-7ecf-4305-b6b2-a05412fa3350 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.550133583Z" level=info msg="Removed container dfb31344b5ce4c05b79dba298738ed8c98fe805d1e943b90da49d4fc8e81f097: gcp-auth/gcp-auth-certs-create-x5xc8/create" id=ff37b9c7-7ecf-4305-b6b2-a05412fa3350 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.552927585Z" level=info msg="Stopping pod sandbox: 41894a0144451d176f2c8c7d06ce4ddfc0d8f6747f76df7ba5b21747c149c34d" id=04bf7e91-bb86-4845-85e0-c6c2c7b92bc1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.552987566Z" level=info msg="Stopped pod sandbox (already stopped): 41894a0144451d176f2c8c7d06ce4ddfc0d8f6747f76df7ba5b21747c149c34d" id=04bf7e91-bb86-4845-85e0-c6c2c7b92bc1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.553451093Z" level=info msg="Removing pod sandbox: 41894a0144451d176f2c8c7d06ce4ddfc0d8f6747f76df7ba5b21747c149c34d" id=1168ed93-5494-426d-91db-a0377137ad5f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.559442938Z" level=info msg="Removed pod sandbox: 41894a0144451d176f2c8c7d06ce4ddfc0d8f6747f76df7ba5b21747c149c34d" id=1168ed93-5494-426d-91db-a0377137ad5f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.560263905Z" level=info msg="Stopping pod sandbox: ff744c58bcf77bfc5e2c2ed83ead7552199ec53850e0c8155d1f91b6f1cc2c3c" id=721b4d9b-50cd-4f22-9780-412d64481250 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.560414358Z" level=info msg="Stopped pod sandbox (already stopped): ff744c58bcf77bfc5e2c2ed83ead7552199ec53850e0c8155d1f91b6f1cc2c3c" id=721b4d9b-50cd-4f22-9780-412d64481250 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.560745286Z" level=info msg="Removing pod sandbox: ff744c58bcf77bfc5e2c2ed83ead7552199ec53850e0c8155d1f91b6f1cc2c3c" id=3564bdde-a5d8-422b-a8e1-92bc0489d5b6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:31:35 addons-720971 crio[832]: time="2025-11-01T09:31:35.565320352Z" level=info msg="Removed pod sandbox: ff744c58bcf77bfc5e2c2ed83ead7552199ec53850e0c8155d1f91b6f1cc2c3c" id=3564bdde-a5d8-422b-a8e1-92bc0489d5b6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.145640825Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=62726996-44aa-466b-9dfb-394c3ed667d4 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.146618727Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aa427217-2852-467b-a94f-287537045035 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.154240986Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5cb116d-c0c0-45e2-9883-03c000918bc6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.162745382Z" level=info msg="Creating container: default/busybox/busybox" id=3fafbe60-9afb-494d-bac0-0b528788616b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.162888819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.169808144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.170481391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.189473606Z" level=info msg="Created container c8492900ba1f0e7da607bed78925079aa9b885c6efc1d794b05b1f52979cf926: default/busybox/busybox" id=3fafbe60-9afb-494d-bac0-0b528788616b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.192080757Z" level=info msg="Starting container: c8492900ba1f0e7da607bed78925079aa9b885c6efc1d794b05b1f52979cf926" id=4f712615-80db-4993-8ee8-0f34563f50e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:31:37 addons-720971 crio[832]: time="2025-11-01T09:31:37.195673702Z" level=info msg="Started container" PID=5050 containerID=c8492900ba1f0e7da607bed78925079aa9b885c6efc1d794b05b1f52979cf926 description=default/busybox/busybox id=4f712615-80db-4993-8ee8-0f34563f50e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=906f95fb04e6dfa5d8f6f56e9002f02a4fe843f0f2e3ea2f8ce23f9e15b2b04e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	c8492900ba1f0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   906f95fb04e6d       busybox                                     default
	303b571899533       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          23 seconds ago       Running             csi-snapshotter                          0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	e66b9ccb0c01f       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          25 seconds ago       Running             csi-provisioner                          0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	6cf6775444e13       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            26 seconds ago       Running             liveness-probe                           0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	3f38970b15f05       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           27 seconds ago       Running             hostpath                                 0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	c15dba784eeb1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            29 seconds ago       Running             gadget                                   0                   56bd7113acce1       gadget-f6mdx                                gadget
	2f69e6ade4240       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             33 seconds ago       Running             controller                               0                   10f95ef6363d1       ingress-nginx-controller-675c5ddd98-gkdm4   ingress-nginx
	39c463f92bb15       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 39 seconds ago       Running             gcp-auth                                 0                   9f02e867cc79b       gcp-auth-78565c9fb4-plnxs                   gcp-auth
	43580d85746e5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                42 seconds ago       Running             node-driver-registrar                    0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	8fe3992cfeef6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              43 seconds ago       Running             registry-proxy                           0                   7c680bb1adf7a       registry-proxy-tml2d                        kube-system
	d4f55b3c93144       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             48 seconds ago       Running             csi-attacher                             0                   a818dc3c8bcaf       csi-hostpath-attacher-0                     kube-system
	cee7ed9ce1f56       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              49 seconds ago       Running             csi-resizer                              0                   2bbf2b4a592bc       csi-hostpath-resizer-0                      kube-system
	663937f8140cd       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               51 seconds ago       Running             cloud-spanner-emulator                   0                   fe385ca222b54       cloud-spanner-emulator-86bd5cbb97-n8sf9     default
	a4e79c5cf7b96       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               56 seconds ago       Running             minikube-ingress-dns                     0                   a015779c2df17       kube-ingress-dns-minikube                   kube-system
	64a188cb4e7e1       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   0838e59fae3ce       yakd-dashboard-5ff678cb9-p9f57              yakd-dashboard
	86e9c5d9f6cea       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   54847aaeb8447       nvidia-device-plugin-daemonset-6xjv5        kube-system
	0e99ffc2f9984       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   33b32ba722b74       local-path-provisioner-648f6765c9-pxbsb     local-path-storage
	2ee6be51ad680       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   24ec6bbbb59ad       ingress-nginx-admission-patch-7jj6d         ingress-nginx
	b30f47b175d57       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9d1c00aaf96e2       snapshot-controller-7d9fbc56b8-dnt8c        kube-system
	203e43681277e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   0c33bd81780b9       ingress-nginx-admission-create-4f8fn        ingress-nginx
	e02cb9b41b9b1       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   7f3e61b014c7c       metrics-server-85b7d694d7-pv7v7             kube-system
	8e4b16182fc32       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   3e9976a06df86       snapshot-controller-7d9fbc56b8-kph7c        kube-system
	012c36c742b1d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   8dc8b099b973f       csi-hostpathplugin-hc2br                    kube-system
	c87eccd73057d       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   8f40ee15d32fd       registry-6b586f9694-5d8hv                   kube-system
	b28d2db9811d7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   4d214b5c6dde6       coredns-66bc5c9577-4fl56                    kube-system
	1aab4e12b2651       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   41c3e7de2a6c8       storage-provisioner                         kube-system
	fd15c88e36dcc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   e1d13dc9cbe2f       kindnet-trnz5                               kube-system
	5d768341f5651       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   4cec8d85d37a0       kube-proxy-p9fft                            kube-system
	243fa64c16788       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   6f3f729a2be24       kube-controller-manager-addons-720971       kube-system
	4ab2a5f98b253       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   3c308651bb70f       etcd-addons-720971                          kube-system
	f1c57c321c093       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   6cf05cf176b49       kube-scheduler-addons-720971                kube-system
	74a9b3705b5e1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   518c87b10b31f       kube-apiserver-addons-720971                kube-system
	
	
	==> coredns [b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0] <==
	[INFO] 10.244.0.18:60240 - 56259 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106448s
	[INFO] 10.244.0.18:60240 - 3131 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002514638s
	[INFO] 10.244.0.18:60240 - 8823 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002536505s
	[INFO] 10.244.0.18:60240 - 227 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000169974s
	[INFO] 10.244.0.18:60240 - 17874 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010351s
	[INFO] 10.244.0.18:49149 - 57706 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160816s
	[INFO] 10.244.0.18:49149 - 57493 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00022298s
	[INFO] 10.244.0.18:53642 - 16818 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013227s
	[INFO] 10.244.0.18:53642 - 16638 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079083s
	[INFO] 10.244.0.18:42910 - 33333 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087444s
	[INFO] 10.244.0.18:42910 - 33153 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007155s
	[INFO] 10.244.0.18:55701 - 50758 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001336583s
	[INFO] 10.244.0.18:55701 - 50536 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315823s
	[INFO] 10.244.0.18:58373 - 43896 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122111s
	[INFO] 10.244.0.18:58373 - 43740 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162335s
	[INFO] 10.244.0.19:40352 - 39291 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175676s
	[INFO] 10.244.0.19:52901 - 36148 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202417s
	[INFO] 10.244.0.19:41757 - 1701 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213789s
	[INFO] 10.244.0.19:52697 - 56904 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165337s
	[INFO] 10.244.0.19:59663 - 23538 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141846s
	[INFO] 10.244.0.19:43498 - 59881 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009725s
	[INFO] 10.244.0.19:47490 - 42253 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002550338s
	[INFO] 10.244.0.19:36679 - 56440 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001803802s
	[INFO] 10.244.0.19:42030 - 9245 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003643977s
	[INFO] 10.244.0.19:60706 - 46114 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003180188s
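	The query pattern above is ordinary resolv.conf search-path expansion: the pod's resolver tries the name with each search suffix (the NXDOMAIN answers) before trying it verbatim (the final NOERROR). A small sketch of that ordering, with the search list read off the suffixed queries in the log and ndots=5 assumed as the usual pod default:

	```python
	# Search-path expansion behind the coredns queries above: a relative name is
	# tried with each search suffix before being tried as-is.
	SEARCH = [
	    "kube-system.svc.cluster.local",
	    "svc.cluster.local",
	    "cluster.local",
	    "us-east-2.compute.internal",  # the node's EC2 search domain
	]

	def lookup_order(name: str, ndots: int = 5) -> list[str]:
	    """Names a resolver with this search list would try, in order."""
	    tries = []
	    if not name.endswith(".") and name.count(".") < ndots:
	        tries = [f"{name}.{suffix}" for suffix in SEARCH]
	    return tries + [name.rstrip(".")]

	for candidate in lookup_order("registry.kube-system.svc.cluster.local"):
	    print(candidate)
	# The four suffixed forms are the NXDOMAIN queries in the log; only the
	# exact service name returns NOERROR.
	```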
	
	
	==> describe nodes <==
	Name:               addons-720971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-720971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-720971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-720971
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-720971"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-720971
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:31:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:31:38 +0000   Sat, 01 Nov 2025 09:29:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:31:38 +0000   Sat, 01 Nov 2025 09:29:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:31:38 +0000   Sat, 01 Nov 2025 09:29:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:31:38 +0000   Sat, 01 Nov 2025 09:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-720971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                83b5d1ed-3170-4ffb-be3a-c9b9b98815af
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-n8sf9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gadget                      gadget-f6mdx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gcp-auth                    gcp-auth-78565c9fb4-plnxs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gkdm4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         119s
	  kube-system                 coredns-66bc5c9577-4fl56                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpathplugin-hc2br                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 etcd-addons-720971                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m11s
	  kube-system                 kindnet-trnz5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-addons-720971                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-addons-720971        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-p9fft                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-720971                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 metrics-server-85b7d694d7-pv7v7              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m
	  kube-system                 nvidia-device-plugin-daemonset-6xjv5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 registry-6b586f9694-5d8hv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-creds-764b6fb674-7sxv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-proxy-tml2d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 snapshot-controller-7d9fbc56b8-dnt8c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-7d9fbc56b8-kph7c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-648f6765c9-pxbsb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-p9f57               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m4s                   kube-proxy       
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node addons-720971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node addons-720971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m18s (x8 over 2m18s)  kubelet          Node addons-720971 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s                  kubelet          Node addons-720971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s                  kubelet          Node addons-720971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s                  kubelet          Node addons-720971 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m7s                   node-controller  Node addons-720971 event: Registered Node addons-720971 in Controller
	  Normal   NodeReady                85s                    kubelet          Node addons-720971 status is now: NodeReady
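	The "Allocated resources" totals above are the column sums of the pod table: CPU requests of 100m (ingress-nginx controller), 100m (coredns), 100m (etcd), 100m (kindnet), 250m (kube-apiserver), 200m (kube-controller-manager), 100m (kube-scheduler) and 100m (metrics-server) add up to 1050m, which against the node's 2 allocatable CPUs gives the reported 52%. A quick check of that arithmetic:

	```python
	# Recomputing the "Allocated resources" CPU line from the pod table above.
	cpu_requests_m = {
	    "ingress-nginx-controller": 100,
	    "coredns": 100,
	    "etcd": 100,
	    "kindnet": 100,
	    "kube-apiserver": 250,
	    "kube-controller-manager": 200,
	    "kube-scheduler": 100,
	    "metrics-server": 100,
	}
	allocatable_cpu_m = 2 * 1000  # "Allocatable: cpu: 2" on this node

	total_m = sum(cpu_requests_m.values())
	print(f"cpu requests: {total_m}m ({total_m * 100 // allocatable_cpu_m}%)")
	# -> cpu requests: 1050m (52%), matching the summary above
	```

	The memory columns sum the same way: 90+70+100+50+200+128 = 638Mi of requests and 170+50+256 = 476Mi of limits.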
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8] <==
	{"level":"warn","ts":"2025-11-01T09:29:31.472290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.501796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.532982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.577643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.598517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.648203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.670585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.701868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.731727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.751259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.773760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.814460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.833642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.868604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.897747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.931014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.961506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:31.981851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:32.142659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:48.125786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:48.141820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.865308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.887251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.933742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:09.947993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36252","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [39c463f92bb152c7e8a166839eda7f4aadd487376b16e33629e6bc53f8bd719e] <==
	2025/11/01 09:31:06 GCP Auth Webhook started!
	2025/11/01 09:31:34 Ready to marshal response ...
	2025/11/01 09:31:34 Ready to write response ...
	2025/11/01 09:31:34 Ready to marshal response ...
	2025/11/01 09:31:34 Ready to write response ...
	2025/11/01 09:31:34 Ready to marshal response ...
	2025/11/01 09:31:34 Ready to write response ...
	
	
	==> kernel <==
	 09:31:46 up  1:14,  0 user,  load average: 1.95, 2.49, 3.11
	Linux addons-720971 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad] <==
	E1101 09:30:11.444714       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:30:11.444825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:30:11.446053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:30:11.446188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:30:12.545718       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:30:12.545750       1 metrics.go:72] Registering metrics
	I1101 09:30:12.545801       1 controller.go:711] "Syncing nftables rules"
	I1101 09:30:21.451864       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:21.451917       1 main.go:301] handling current node
	I1101 09:30:31.444089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:31.444131       1 main.go:301] handling current node
	I1101 09:30:41.443974       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:41.444048       1 main.go:301] handling current node
	I1101 09:30:51.444682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:51.444725       1 main.go:301] handling current node
	I1101 09:31:01.445815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:01.445861       1 main.go:301] handling current node
	I1101 09:31:11.445788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:11.445831       1 main.go:301] handling current node
	I1101 09:31:21.443628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:21.443666       1 main.go:301] handling current node
	I1101 09:31:31.444647       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:31.444697       1 main.go:301] handling current node
	I1101 09:31:41.444664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:41.444779       1 main.go:301] handling current node
	
	
	==> kube-apiserver [74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea] <==
	W1101 09:29:48.140206       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1101 09:29:50.516503       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.201.89"}
	W1101 09:30:09.865055       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:09.887012       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:09.923097       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:09.946849       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:22.017039       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.201.89:443: connect: connection refused
	E1101 09:30:22.017161       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.201.89:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:22.017180       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.201.89:443: connect: connection refused
	E1101 09:30:22.017905       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.201.89:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:22.099229       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.201.89:443: connect: connection refused
	E1101 09:30:22.099271       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.201.89:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:43.090857       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:43.091246       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:30:43.091306       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:30:43.092219       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:43.097066       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:43.118335       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.120.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.120.50:443: connect: connection refused" logger="UnhandledError"
	I1101 09:30:43.277179       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:31:43.849433       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47402: use of closed network connection
	E1101 09:31:44.086634       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E1101 09:31:44.228059       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47430: use of closed network connection
	
	
	==> kube-controller-manager [243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d] <==
	I1101 09:29:39.884282       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:29:39.885278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:29:39.885539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:29:39.885592       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:29:39.886519       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:29:39.886560       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:29:39.886552       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:29:39.886649       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:29:39.886690       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:29:39.886755       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:29:39.886539       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:29:39.889141       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:29:39.889129       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:29:39.891510       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E1101 09:29:46.126832       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 09:30:09.857055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:30:09.857230       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:30:09.857290       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:30:09.888702       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:30:09.894568       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:30:09.960342       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:09.995816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:24.856401       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1101 09:30:39.965629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:30:40.005811       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147] <==
	I1101 09:29:41.234359       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:29:41.335258       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:41.440188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:41.440223       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:29:41.440299       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:41.513178       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:29:41.513304       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:41.536328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:41.536727       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:41.536951       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:41.542191       1 config.go:200] "Starting service config controller"
	I1101 09:29:41.542275       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:41.542320       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:41.542368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:41.542405       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:41.542439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:41.543190       1 config.go:309] "Starting node config controller"
	I1101 09:29:41.543328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:41.543371       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:41.643088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:41.643222       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:41.643241       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440] <==
	I1101 09:29:34.084755       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:34.092349       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:29:34.092707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:29:34.092740       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:29:34.092762       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:29:34.103067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:29:34.103227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:29:34.103330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:29:34.110119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:29:34.110724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:34.110867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:29:34.111072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:34.111173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:34.111272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:29:34.111358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:29:34.111451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:34.111540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:29:34.111628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:29:34.111733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:29:34.111876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:29:34.111987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:29:34.112066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:29:34.112586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:29:34.112697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1101 09:29:35.193191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:31:00 addons-720971 kubelet[1270]: I1101 09:31:00.479039    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjcs2\" (UniqueName: \"kubernetes.io/projected/d7da61a6-f908-4576-8b5c-4816b57affa6-kube-api-access-jjcs2\") pod \"d7da61a6-f908-4576-8b5c-4816b57affa6\" (UID: \"d7da61a6-f908-4576-8b5c-4816b57affa6\") "
	Nov 01 09:31:00 addons-720971 kubelet[1270]: I1101 09:31:00.482217    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7da61a6-f908-4576-8b5c-4816b57affa6-kube-api-access-jjcs2" (OuterVolumeSpecName: "kube-api-access-jjcs2") pod "d7da61a6-f908-4576-8b5c-4816b57affa6" (UID: "d7da61a6-f908-4576-8b5c-4816b57affa6"). InnerVolumeSpecName "kube-api-access-jjcs2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:31:00 addons-720971 kubelet[1270]: I1101 09:31:00.579952    1270 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjcs2\" (UniqueName: \"kubernetes.io/projected/d7da61a6-f908-4576-8b5c-4816b57affa6-kube-api-access-jjcs2\") on node \"addons-720971\" DevicePath \"\""
	Nov 01 09:31:01 addons-720971 kubelet[1270]: I1101 09:31:01.216826    1270 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41894a0144451d176f2c8c7d06ce4ddfc0d8f6747f76df7ba5b21747c149c34d"
	Nov 01 09:31:01 addons-720971 kubelet[1270]: I1101 09:31:01.627071    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d838a23-b73b-4934-b5d5-88931e7cd745" path="/var/lib/kubelet/pods/3d838a23-b73b-4934-b5d5-88931e7cd745/volumes"
	Nov 01 09:31:02 addons-720971 kubelet[1270]: I1101 09:31:02.222865    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tml2d" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:31:02 addons-720971 kubelet[1270]: I1101 09:31:02.240547    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-tml2d" podStartSLOduration=1.523506601 podStartE2EDuration="40.240526469s" podCreationTimestamp="2025-11-01 09:30:22 +0000 UTC" firstStartedPulling="2025-11-01 09:30:23.405962903 +0000 UTC m=+48.012938116" lastFinishedPulling="2025-11-01 09:31:02.122982771 +0000 UTC m=+86.729957984" observedRunningTime="2025-11-01 09:31:02.239736452 +0000 UTC m=+86.846711665" watchObservedRunningTime="2025-11-01 09:31:02.240526469 +0000 UTC m=+86.847501682"
	Nov 01 09:31:03 addons-720971 kubelet[1270]: I1101 09:31:03.226370    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tml2d" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:31:07 addons-720971 kubelet[1270]: I1101 09:31:07.266442    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-plnxs" podStartSLOduration=41.009678912 podStartE2EDuration="1m17.26642199s" podCreationTimestamp="2025-11-01 09:29:50 +0000 UTC" firstStartedPulling="2025-11-01 09:30:30.245653176 +0000 UTC m=+54.852628389" lastFinishedPulling="2025-11-01 09:31:06.502396164 +0000 UTC m=+91.109371467" observedRunningTime="2025-11-01 09:31:07.265036414 +0000 UTC m=+91.872011635" watchObservedRunningTime="2025-11-01 09:31:07.26642199 +0000 UTC m=+91.873397219"
	Nov 01 09:31:17 addons-720971 kubelet[1270]: I1101 09:31:17.314555    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-gkdm4" podStartSLOduration=55.955301389 podStartE2EDuration="1m30.314448754s" podCreationTimestamp="2025-11-01 09:29:47 +0000 UTC" firstStartedPulling="2025-11-01 09:30:38.088955419 +0000 UTC m=+62.695930632" lastFinishedPulling="2025-11-01 09:31:12.448102784 +0000 UTC m=+97.055077997" observedRunningTime="2025-11-01 09:31:13.300911626 +0000 UTC m=+97.907886880" watchObservedRunningTime="2025-11-01 09:31:17.314448754 +0000 UTC m=+101.921423967"
	Nov 01 09:31:19 addons-720971 kubelet[1270]: I1101 09:31:19.741437    1270 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 01 09:31:19 addons-720971 kubelet[1270]: I1101 09:31:19.741497    1270 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 01 09:31:20 addons-720971 kubelet[1270]: I1101 09:31:20.835475    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-f6mdx" podStartSLOduration=68.958071512 podStartE2EDuration="1m34.835459511s" podCreationTimestamp="2025-11-01 09:29:46 +0000 UTC" firstStartedPulling="2025-11-01 09:30:50.663648231 +0000 UTC m=+75.270623452" lastFinishedPulling="2025-11-01 09:31:16.541036239 +0000 UTC m=+101.148011451" observedRunningTime="2025-11-01 09:31:17.324888667 +0000 UTC m=+101.931863880" watchObservedRunningTime="2025-11-01 09:31:20.835459511 +0000 UTC m=+105.442434724"
	Nov 01 09:31:23 addons-720971 kubelet[1270]: I1101 09:31:23.349764    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-hc2br" podStartSLOduration=1.421804558 podStartE2EDuration="1m1.34974475s" podCreationTimestamp="2025-11-01 09:30:22 +0000 UTC" firstStartedPulling="2025-11-01 09:30:22.48518235 +0000 UTC m=+47.092157563" lastFinishedPulling="2025-11-01 09:31:22.413122534 +0000 UTC m=+107.020097755" observedRunningTime="2025-11-01 09:31:23.347644622 +0000 UTC m=+107.954619843" watchObservedRunningTime="2025-11-01 09:31:23.34974475 +0000 UTC m=+107.956719971"
	Nov 01 09:31:26 addons-720971 kubelet[1270]: E1101 09:31:26.027881    1270 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 09:31:26 addons-720971 kubelet[1270]: E1101 09:31:26.027977    1270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f830ed47-72eb-4e5e-b87f-fb1b4985d259-gcr-creds podName:f830ed47-72eb-4e5e-b87f-fb1b4985d259 nodeName:}" failed. No retries permitted until 2025-11-01 09:32:30.027958605 +0000 UTC m=+174.634933826 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/f830ed47-72eb-4e5e-b87f-fb1b4985d259-gcr-creds") pod "registry-creds-764b6fb674-7sxv4" (UID: "f830ed47-72eb-4e5e-b87f-fb1b4985d259") : secret "registry-creds-gcr" not found
	Nov 01 09:31:31 addons-720971 kubelet[1270]: I1101 09:31:31.628279    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7da61a6-f908-4576-8b5c-4816b57affa6" path="/var/lib/kubelet/pods/d7da61a6-f908-4576-8b5c-4816b57affa6/volumes"
	Nov 01 09:31:34 addons-720971 kubelet[1270]: I1101 09:31:34.802309    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f9c19b18-e0d8-4eae-887d-9c6a70258ee3-gcp-creds\") pod \"busybox\" (UID: \"f9c19b18-e0d8-4eae-887d-9c6a70258ee3\") " pod="default/busybox"
	Nov 01 09:31:34 addons-720971 kubelet[1270]: I1101 09:31:34.802366    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brgv9\" (UniqueName: \"kubernetes.io/projected/f9c19b18-e0d8-4eae-887d-9c6a70258ee3-kube-api-access-brgv9\") pod \"busybox\" (UID: \"f9c19b18-e0d8-4eae-887d-9c6a70258ee3\") " pod="default/busybox"
	Nov 01 09:31:35 addons-720971 kubelet[1270]: W1101 09:31:35.252295    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/490d904a357f641cc908fbd95170db9da44a0f8e618547cbbe2c646bd495a897/crio-906f95fb04e6dfa5d8f6f56e9002f02a4fe843f0f2e3ea2f8ce23f9e15b2b04e WatchSource:0}: Error finding container 906f95fb04e6dfa5d8f6f56e9002f02a4fe843f0f2e3ea2f8ce23f9e15b2b04e: Status 404 returned error can't find the container with id 906f95fb04e6dfa5d8f6f56e9002f02a4fe843f0f2e3ea2f8ce23f9e15b2b04e
	Nov 01 09:31:35 addons-720971 kubelet[1270]: I1101 09:31:35.522022    1270 scope.go:117] "RemoveContainer" containerID="62318c77c543c15ab0c3f838f2fca268885480919643cffaefce48516691316d"
	Nov 01 09:31:35 addons-720971 kubelet[1270]: I1101 09:31:35.537916    1270 scope.go:117] "RemoveContainer" containerID="dfb31344b5ce4c05b79dba298738ed8c98fe805d1e943b90da49d4fc8e81f097"
	Nov 01 09:31:37 addons-720971 kubelet[1270]: I1101 09:31:37.389118    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.4981501640000001 podStartE2EDuration="3.38910125s" podCreationTimestamp="2025-11-01 09:31:34 +0000 UTC" firstStartedPulling="2025-11-01 09:31:35.25648357 +0000 UTC m=+119.863458783" lastFinishedPulling="2025-11-01 09:31:37.147434656 +0000 UTC m=+121.754409869" observedRunningTime="2025-11-01 09:31:37.388336383 +0000 UTC m=+121.995311604" watchObservedRunningTime="2025-11-01 09:31:37.38910125 +0000 UTC m=+121.996076471"
	Nov 01 09:31:39 addons-720971 kubelet[1270]: I1101 09:31:39.624735    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-5d8hv" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:31:44 addons-720971 kubelet[1270]: E1101 09:31:44.090476    1270 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39094->127.0.0.1:45281: write tcp 127.0.0.1:39094->127.0.0.1:45281: write: broken pipe
	
	
	==> storage-provisioner [1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1] <==
	W1101 09:31:21.525251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:23.530011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:23.540667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:25.543943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:25.548430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:27.551023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:27.558145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.561722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.566501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:31.570185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:31.575267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.579069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.591973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:35.595056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:35.599239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:37.602398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:37.609246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:39.612974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:39.617558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:41.620959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:41.627598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.631282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.636055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:45.639613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:45.647003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
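Aside on the storage-provisioner warnings at the end of the log dump above: they note that v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. Purely as an illustrative sketch (not part of the test run), the suggested replacement call with client-go might look like the following; the "kube-system" namespace and the in-cluster config are assumptions made for the example.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumes the program runs in a pod with RBAC to list EndpointSlices.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// discovery.k8s.io/v1 EndpointSlice replaces the deprecated v1
		// Endpoints API that the warnings above refer to.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}
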
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-720971 -n addons-720971
helpers_test.go:269: (dbg) Run:  kubectl --context addons-720971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d registry-creds-764b6fb674-7sxv4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-720971 describe pod ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d registry-creds-764b6fb674-7sxv4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-720971 describe pod ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d registry-creds-764b6fb674-7sxv4: exit status 1 (83.447048ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4f8fn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7jj6d" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7sxv4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-720971 describe pod ingress-nginx-admission-create-4f8fn ingress-nginx-admission-patch-7jj6d registry-creds-764b6fb674-7sxv4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable headlamp --alsologtostderr -v=1: exit status 11 (264.188782ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:47.652536  294566 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:47.653375  294566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:47.653539  294566 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:47.653560  294566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:47.653909  294566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:31:47.654293  294566 mustload.go:66] Loading cluster: addons-720971
	I1101 09:31:47.654799  294566 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:47.654834  294566 addons.go:607] checking whether the cluster is paused
	I1101 09:31:47.654996  294566 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:47.655037  294566 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:31:47.655980  294566 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:31:47.673604  294566 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:47.673661  294566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:31:47.691631  294566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:31:47.796248  294566 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:47.796337  294566 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:47.827545  294566 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:31:47.827566  294566 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:31:47.827571  294566 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:31:47.827575  294566 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:31:47.827578  294566 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:31:47.827583  294566 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:31:47.827588  294566 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:31:47.827600  294566 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:31:47.827605  294566 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:31:47.827612  294566 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:31:47.827620  294566 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:31:47.827623  294566 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:31:47.827627  294566 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:31:47.827630  294566 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:31:47.827633  294566 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:31:47.827639  294566 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:31:47.827644  294566 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:31:47.827648  294566 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:31:47.827652  294566 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:31:47.827655  294566 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:31:47.827659  294566 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:31:47.827673  294566 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:31:47.827676  294566 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:31:47.827679  294566 cri.go:89] found id: ""
	I1101 09:31:47.827732  294566 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:47.843316  294566 out.go:203] 
	W1101 09:31:47.846080  294566 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:47.846103  294566 out.go:285] * 
	* 
	W1101 09:31:47.852711  294566 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:47.855645  294566 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.37s)
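
The Headlamp failure above exits with MK_ADDON_DISABLE_PAUSED because the addon-disable path first checks whether the cluster is paused: it lists kube-system containers via crictl and then shells out to "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". The same runc failure recurs verbatim in the CloudSpanner and LocalPath failures that follow. A minimal sketch for reproducing the two commands from the stderr above, assuming it is run inside the node (for example via "minikube ssh -p addons-720971"); the commands themselves are taken directly from the captured log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the paused-state check uses to enumerate kube-system
		// containers (this part succeeds in the stderr above).
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		fmt.Printf("crictl: err=%v\n%s\n", err, out)

		// Same command whose failure produces MK_ADDON_DISABLE_PAUSED: on this
		// node /run/runc does not exist, so runc exits with status 1.
		out, err = exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc: err=%v\n%s\n", err, out)
	}
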

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-n8sf9" [74c95d4c-b505-4748-8f75-96a329fdc73b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007514851s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (329.551379ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:33:00.521485  296459 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:33:00.523093  296459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:00.523125  296459 out.go:374] Setting ErrFile to fd 2...
	I1101 09:33:00.523132  296459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:00.523741  296459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:33:00.524278  296459 mustload.go:66] Loading cluster: addons-720971
	I1101 09:33:00.524685  296459 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:00.524704  296459 addons.go:607] checking whether the cluster is paused
	I1101 09:33:00.524809  296459 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:00.524824  296459 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:33:00.525465  296459 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:33:00.549142  296459 ssh_runner.go:195] Run: systemctl --version
	I1101 09:33:00.549213  296459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:33:00.570998  296459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:33:00.677317  296459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:33:00.677411  296459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:33:00.717271  296459 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:33:00.717296  296459 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:33:00.717304  296459 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:33:00.717308  296459 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:33:00.717312  296459 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:33:00.717317  296459 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:33:00.717338  296459 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:33:00.717345  296459 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:33:00.717348  296459 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:33:00.717354  296459 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:33:00.717360  296459 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:33:00.717364  296459 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:33:00.717367  296459 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:33:00.717371  296459 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:33:00.717374  296459 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:33:00.717379  296459 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:33:00.717385  296459 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:33:00.717394  296459 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:33:00.717397  296459 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:33:00.717401  296459 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:33:00.717406  296459 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:33:00.717412  296459 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:33:00.717415  296459 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:33:00.717418  296459 cri.go:89] found id: ""
	I1101 09:33:00.717471  296459 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:33:00.734953  296459 out.go:203] 
	W1101 09:33:00.738558  296459 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:33:00.738591  296459 out.go:285] * 
	* 
	W1101 09:33:00.745098  296459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:33:00.748034  296459 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.35s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-720971 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-720971 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-720971 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b9e58991-c00e-423e-9031-f4e8edeb3b4c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b9e58991-c00e-423e-9031-f4e8edeb3b4c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b9e58991-c00e-423e-9031-f4e8edeb3b4c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003417837s
addons_test.go:967: (dbg) Run:  kubectl --context addons-720971 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 ssh "cat /opt/local-path-provisioner/pvc-13036f40-77fc-479b-8d89-adac40366789_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-720971 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-720971 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (290.060765ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:54.171578  296352 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:54.172694  296352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:54.172741  296352 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:54.172762  296352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:54.173055  296352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:54.173442  296352 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:54.173925  296352 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:54.173975  296352 addons.go:607] checking whether the cluster is paused
	I1101 09:32:54.174118  296352 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:54.174156  296352 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:54.174655  296352 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:54.194947  296352 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:54.195009  296352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:54.219943  296352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:54.328466  296352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:54.328547  296352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:54.358875  296352 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:54.358897  296352 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:54.358903  296352 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:54.358907  296352 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:54.358910  296352 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:54.358914  296352 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:54.358917  296352 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:54.358920  296352 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:54.358923  296352 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:54.358929  296352 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:54.358933  296352 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:54.358936  296352 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:54.358939  296352 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:54.358942  296352 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:54.358945  296352 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:54.358950  296352 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:54.358958  296352 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:54.358966  296352 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:54.358970  296352 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:54.358973  296352 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:54.358977  296352 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:54.358980  296352 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:54.358983  296352 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:54.358987  296352 cri.go:89] found id: ""
	I1101 09:32:54.359044  296352 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:54.385485  296352 out.go:203] 
	W1101 09:32:54.389036  296352 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:54.389085  296352 out.go:285] * 
	* 
	W1101 09:32:54.395520  296352 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:54.399735  296352 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.42s)
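All of the MK_ADDON_DISABLE_PAUSED failures in this run share the same shape: the crictl listing of kube-system containers succeeds, but the follow-up paused-state probe, sudo runc list -f json, fails with "open /run/runc: no such file or directory", and the addons disable command exits 11. A minimal reproduction against the affected profile, re-running only the two commands that appear in the log above (a sketch: it assumes the addons-720971 profile from this run is still up and the out/minikube-linux-arm64 binary is available):

    # Listing kube-system containers through crictl works, as in the log.
    out/minikube-linux-arm64 -p addons-720971 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # The probe that fails: there is no /run/runc state directory on this CRI-O node,
    # so the runc listing exits 1 and minikube treats the pause check as failed.
    out/minikube-linux-arm64 -p addons-720971 ssh -- sudo runc list -f json
    # expected: level=error msg="open /run/runc: no such file or directory"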

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6xjv5" [aa68419c-893b-43e0-9bb6-e81c2a645e34] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003012025s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (267.618666ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:40.433761  295979 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:40.434484  295979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:40.434503  295979 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:40.434510  295979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:40.434784  295979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:40.435097  295979 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:40.435633  295979 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:40.435671  295979 addons.go:607] checking whether the cluster is paused
	I1101 09:32:40.435825  295979 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:40.435845  295979 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:40.436415  295979 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:40.456202  295979 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:40.456266  295979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:40.477119  295979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:40.581903  295979 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:40.582027  295979 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:40.614773  295979 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:40.614815  295979 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:40.614822  295979 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:40.614827  295979 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:40.614830  295979 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:40.614834  295979 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:40.614837  295979 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:40.614841  295979 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:40.614844  295979 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:40.614850  295979 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:40.614853  295979 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:40.614857  295979 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:40.614860  295979 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:40.614864  295979 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:40.614867  295979 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:40.614872  295979 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:40.614876  295979 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:40.614880  295979 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:40.614883  295979 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:40.614886  295979 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:40.614890  295979 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:40.614896  295979 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:40.614900  295979 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:40.614902  295979 cri.go:89] found id: ""
	I1101 09:32:40.614956  295979 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:40.630309  295979 out.go:203] 
	W1101 09:32:40.633158  295979 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:40.633181  295979 out.go:285] * 
	* 
	W1101 09:32:40.639871  295979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:40.642823  295979 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                    
TestAddons/parallel/Yakd (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-p9f57" [66816764-84ac-43a0-bed4-eb4b5a36d7cc] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010653233s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-720971 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-720971 addons disable yakd --alsologtostderr -v=1: exit status 11 (323.508493ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:45.747614  296052 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:45.748473  296052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:45.748525  296052 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:45.748546  296052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:45.748896  296052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:32:45.749323  296052 mustload.go:66] Loading cluster: addons-720971
	I1101 09:32:45.749816  296052 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:45.749860  296052 addons.go:607] checking whether the cluster is paused
	I1101 09:32:45.750002  296052 config.go:182] Loaded profile config "addons-720971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:45.750052  296052 host.go:66] Checking if "addons-720971" exists ...
	I1101 09:32:45.750576  296052 cli_runner.go:164] Run: docker container inspect addons-720971 --format={{.State.Status}}
	I1101 09:32:45.779862  296052 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:45.779946  296052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-720971
	I1101 09:32:45.803563  296052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/addons-720971/id_rsa Username:docker}
	I1101 09:32:45.908200  296052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:45.908364  296052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:45.950798  296052 cri.go:89] found id: "303b5718995335acf9ac28000dc141e78d5e946f1fd63383b1a41c20e71fdd5a"
	I1101 09:32:45.950819  296052 cri.go:89] found id: "e66b9ccb0c01fa9b8376c95d693c154a9b6d42563570b6ae96f07055f157afa3"
	I1101 09:32:45.950825  296052 cri.go:89] found id: "6cf6775444e13f2383004700ee190dcd2b09bd298af2da6031c027eb5009e06e"
	I1101 09:32:45.950829  296052 cri.go:89] found id: "3f38970b15f053612de6d7c0a0347c1b95934b4b058542ae34f61ccdaa1c127a"
	I1101 09:32:45.950832  296052 cri.go:89] found id: "43580d85746e52b637b9c0943d404df67e46520371e040e1887096d56e3ac5a8"
	I1101 09:32:45.950835  296052 cri.go:89] found id: "8fe3992cfeef6bcbabf177961a8b218a2c63350c35c30bef4b78fc180bc88be1"
	I1101 09:32:45.950839  296052 cri.go:89] found id: "d4f55b3c931444b4f0740f73776f612d8731e2832d115585a09ac7651b81b4d4"
	I1101 09:32:45.950842  296052 cri.go:89] found id: "cee7ed9ce1f56b74a0e3365e487f2dcb93be13bcf4c025d0d9a05b2774d7588d"
	I1101 09:32:45.950845  296052 cri.go:89] found id: "a4e79c5cf7b969750c6aaa81fe7038d487320171712a212c86453afb01f45543"
	I1101 09:32:45.950855  296052 cri.go:89] found id: "86e9c5d9f6cea513731a404c82c29bb19f53da24fd92656973c6d409d0e8201b"
	I1101 09:32:45.950858  296052 cri.go:89] found id: "b30f47b175d57095130450c8056cc1456b28a1c548167eebb8b98bc629b6bbf1"
	I1101 09:32:45.950861  296052 cri.go:89] found id: "e02cb9b41b9b12dfb0903c624042039a0bd773ee74083111f44c3d6d67885cd7"
	I1101 09:32:45.950865  296052 cri.go:89] found id: "8e4b16182fc320f98854e897e9678d81cc10c9b9cfcf75642969c55d344505a2"
	I1101 09:32:45.950868  296052 cri.go:89] found id: "012c36c742b1dda840de7937617e00a3e746d77f9c4fc4d7b29b8e4b6daf7d94"
	I1101 09:32:45.950871  296052 cri.go:89] found id: "c87eccd73057d31df9311b005c8511d06633ff0f677ea62f1e1a3a6f8eeb760c"
	I1101 09:32:45.950878  296052 cri.go:89] found id: "b28d2db9811d791437cc9e580b1793b9e9be74601631c2b89c24209b2bbe0de0"
	I1101 09:32:45.950882  296052 cri.go:89] found id: "1aab4e12b2651fd15cb25b389c70d17fb0d053431f4023d5d0ad482b95f4f4a1"
	I1101 09:32:45.950886  296052 cri.go:89] found id: "fd15c88e36dccc16d92e7c788a26683ebfe440ff9f79848115109fda8e2826ad"
	I1101 09:32:45.950890  296052 cri.go:89] found id: "5d768341f5651e0208d63a36df9c28ce02f3e6c2d6d7c1d85d2ba91d0f7fe147"
	I1101 09:32:45.950893  296052 cri.go:89] found id: "243fa64c167884842947433ab9681cc17515448b3379bb29157390c33119756d"
	I1101 09:32:45.950897  296052 cri.go:89] found id: "4ab2a5f98b253d802c302088c7758142a08dfa9bf277db3417fca0c0308d72e8"
	I1101 09:32:45.950900  296052 cri.go:89] found id: "f1c57c321c0936b9dcbbb2677da76f09341d8d70ced86701ddfb2078df841440"
	I1101 09:32:45.950903  296052 cri.go:89] found id: "74a9b3705b5e1f558af896c8ec9af2d8be85ba58035b660711cfbad63941b7ea"
	I1101 09:32:45.950906  296052 cri.go:89] found id: ""
	I1101 09:32:45.950957  296052 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:45.966898  296052 out.go:203] 
	W1101 09:32:45.969887  296052 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:45.969914  296052 out.go:285] * 
	* 
	W1101 09:32:45.976334  296052 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:45.979195  296052 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-720971 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.34s)
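The LocalPath, NvidiaDevicePlugin and Yakd failures above all hit the same paused-state probe within the same minute (09:32:40 to 09:32:54), so it is worth ruling out the cluster actually being paused. Two quick checks, independent of the runc probe (a sketch; the status field layout is the usual minikube output and may differ slightly by version):

    # Confirm the profile is not paused at the minikube level.
    out/minikube-linux-arm64 -p addons-720971 status

    # Confirm the path the failing probe looks for really is absent on the node.
    out/minikube-linux-arm64 -p addons-720971 ssh -- ls -ld /run/runc
    # expected here: ls: cannot access '/run/runc': No such file or directory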

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-034342 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-034342 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9n7nd" [baa6a22a-e8e5-478f-b01f-46c7df7f927c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-034342 -n functional-034342
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 09:49:01.488904813 +0000 UTC m=+1232.020561578
functional_test.go:1645: (dbg) Run:  kubectl --context functional-034342 describe po hello-node-connect-7d85dfc575-9n7nd -n default
functional_test.go:1645: (dbg) kubectl --context functional-034342 describe po hello-node-connect-7d85dfc575-9n7nd -n default:
Name:             hello-node-connect-7d85dfc575-9n7nd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-034342/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:39:01 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-474qt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-474qt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9n7nd to functional-034342
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m44s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m44s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-034342 logs hello-node-connect-7d85dfc575-9n7nd -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-034342 logs hello-node-connect-7d85dfc575-9n7nd -n default: exit status 1 (109.141437ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9n7nd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-034342 logs hello-node-connect-7d85dfc575-9n7nd -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-034342 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-9n7nd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-034342/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:39:01 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-474qt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-474qt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9n7nd to functional-034342
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m44s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m44s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-034342 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-034342 logs -l app=hello-node-connect: exit status 1 (88.527841ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9n7nd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-034342 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-034342 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.252.252
IPs:                      10.108.252.252
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31800/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
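This failure has a different root cause from the addon-disable failures: the kubelet events show every pull of kicbase/echo-server rejected with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", so the pod never leaves ImagePullBackOff and the service has no endpoints. Two checks that narrow it down (a sketch; the /etc/containers path and the docker.io-qualified tag are assumptions for illustration, not something the test itself runs):

    # Show which short-name policy the node's container tooling enforces.
    out/minikube-linux-arm64 -p functional-034342 ssh -- grep -Rn short-name-mode /etc/containers/

    # Check whether a fully qualified reference pulls where the short name did not.
    out/minikube-linux-arm64 -p functional-034342 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest

If the qualified pull succeeds, the ambiguity is purely in short-name resolution on the crio runtime, and the deployment would start if the test passed a fully qualified image to kubectl create deployment.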
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-034342
helpers_test.go:243: (dbg) docker inspect functional-034342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f",
	        "Created": "2025-11-01T09:35:56.285405769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302859,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:35:56.34516705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f/hostname",
	        "HostsPath": "/var/lib/docker/containers/52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f/hosts",
	        "LogPath": "/var/lib/docker/containers/52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f/52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f-json.log",
	        "Name": "/functional-034342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-034342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-034342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "52d40408fe127c60bae84586cc45d280984634ea418de16f43f9b7f52d758c3f",
	                "LowerDir": "/var/lib/docker/overlay2/d3dda5fd0d961bb127861609b8a27f97ec60f4fafcd2ca1db6b9ee668cf902f2-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3dda5fd0d961bb127861609b8a27f97ec60f4fafcd2ca1db6b9ee668cf902f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3dda5fd0d961bb127861609b8a27f97ec60f4fafcd2ca1db6b9ee668cf902f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3dda5fd0d961bb127861609b8a27f97ec60f4fafcd2ca1db6b9ee668cf902f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-034342",
	                "Source": "/var/lib/docker/volumes/functional-034342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-034342",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-034342",
	                "name.minikube.sigs.k8s.io": "functional-034342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbda6a14f7c144743f95976042956da4a5643eda394264c698aae38f0a8291a9",
	            "SandboxKey": "/var/run/docker/netns/dbda6a14f7c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-034342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:16:fa:7b:c7:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73600224ff75fcced17d4ae16ec4bc6484ac5346b1200255b9598d84e058573f",
	                    "EndpointID": "99cf3d607c4a96ccfbe23fd1aac2b968dbfc18182a315e219cc644eb28188e3f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-034342",
	                        "52d40408fe12"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
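Most of the inspect dump above matters only for the published port map: the post-mortem helpers reach the node over the 127.0.0.1 host ports bound to 22/tcp and 8441/tcp. The same Go template that the cli_runner lines use earlier in this report extracts just that mapping (a sketch; quoting adjusted for an interactive shell, container name functional-034342 is from this run):

    # Host port bound to the node's SSH port; per the Ports section above this is 33149.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-034342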
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-034342 -n functional-034342
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 logs -n 25: (1.443350277s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-034342 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                  │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ kubectl │ functional-034342 kubectl -- --context functional-034342 get pods                                                        │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ start   │ -p functional-034342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                 │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ service │ invalid-svc -p functional-034342                                                                                         │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ cp      │ functional-034342 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                       │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ config  │ functional-034342 config unset cpus                                                                                      │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ config  │ functional-034342 config get cpus                                                                                        │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ config  │ functional-034342 config set cpus 2                                                                                      │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ config  │ functional-034342 config get cpus                                                                                        │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh     │ functional-034342 ssh -n functional-034342 sudo cat /home/docker/cp-test.txt                                             │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ config  │ functional-034342 config unset cpus                                                                                      │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ config  │ functional-034342 config get cpus                                                                                        │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ ssh     │ functional-034342 ssh echo hello                                                                                         │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ cp      │ functional-034342 cp functional-034342:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd28841023/001/cp-test.txt │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh     │ functional-034342 ssh cat /etc/hostname                                                                                  │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh     │ functional-034342 ssh -n functional-034342 sudo cat /home/docker/cp-test.txt                                             │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ tunnel  │ functional-034342 tunnel --alsologtostderr                                                                               │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ tunnel  │ functional-034342 tunnel --alsologtostderr                                                                               │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ cp      │ functional-034342 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh     │ functional-034342 ssh -n functional-034342 sudo cat /tmp/does/not/exist/cp-test.txt                                      │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ tunnel  │ functional-034342 tunnel --alsologtostderr                                                                               │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ addons  │ functional-034342 addons list                                                                                            │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:39 UTC │ 01 Nov 25 09:39 UTC │
	│ addons  │ functional-034342 addons list -o json                                                                                    │ functional-034342 │ jenkins │ v1.37.0 │ 01 Nov 25 09:39 UTC │ 01 Nov 25 09:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:38:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:38:11.328191  307280 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:38:11.328293  307280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:11.328297  307280 out.go:374] Setting ErrFile to fd 2...
	I1101 09:38:11.328301  307280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:11.328578  307280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:38:11.328917  307280 out.go:368] Setting JSON to false
	I1101 09:38:11.329847  307280 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4841,"bootTime":1761985051,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:38:11.329904  307280 start.go:143] virtualization:  
	I1101 09:38:11.333498  307280 out.go:179] * [functional-034342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:38:11.336732  307280 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:38:11.336810  307280 notify.go:221] Checking for updates...
	I1101 09:38:11.340449  307280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:38:11.343567  307280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:38:11.346480  307280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:38:11.349402  307280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:38:11.352393  307280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:38:11.355701  307280 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:11.355801  307280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:38:11.381707  307280 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:38:11.381816  307280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:38:11.444334  307280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 09:38:11.435171772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:38:11.444435  307280 docker.go:319] overlay module found
	I1101 09:38:11.447463  307280 out.go:179] * Using the docker driver based on existing profile
	I1101 09:38:11.450347  307280 start.go:309] selected driver: docker
	I1101 09:38:11.450356  307280 start.go:930] validating driver "docker" against &{Name:functional-034342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:11.450446  307280 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:38:11.450555  307280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:38:11.503639  307280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 09:38:11.493819816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:38:11.504061  307280 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:38:11.504086  307280 cni.go:84] Creating CNI manager for ""
	I1101 09:38:11.504140  307280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:38:11.504182  307280 start.go:353] cluster config:
	{Name:functional-034342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:11.507439  307280 out.go:179] * Starting "functional-034342" primary control-plane node in "functional-034342" cluster
	I1101 09:38:11.510327  307280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:38:11.513134  307280 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:38:11.516035  307280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:38:11.516082  307280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:38:11.516089  307280 cache.go:59] Caching tarball of preloaded images
	I1101 09:38:11.516153  307280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:38:11.516181  307280 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:38:11.516205  307280 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:38:11.516316  307280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/config.json ...
	I1101 09:38:11.535641  307280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:38:11.535653  307280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:38:11.535674  307280 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:38:11.535695  307280 start.go:360] acquireMachinesLock for functional-034342: {Name:mkb64e5642ac0a97cd5437db192b3a3a5208dc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:38:11.535757  307280 start.go:364] duration metric: took 46.491µs to acquireMachinesLock for "functional-034342"
	I1101 09:38:11.535776  307280 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:38:11.535780  307280 fix.go:54] fixHost starting: 
	I1101 09:38:11.536037  307280 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
	I1101 09:38:11.552940  307280 fix.go:112] recreateIfNeeded on functional-034342: state=Running err=<nil>
	W1101 09:38:11.552959  307280 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:38:11.556167  307280 out.go:252] * Updating the running docker "functional-034342" container ...
	I1101 09:38:11.556189  307280 machine.go:94] provisionDockerMachine start ...
	I1101 09:38:11.556287  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:11.574263  307280 main.go:143] libmachine: Using SSH client type: native
	I1101 09:38:11.574568  307280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1101 09:38:11.574574  307280 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:38:11.725542  307280 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-034342
	
	I1101 09:38:11.725577  307280 ubuntu.go:182] provisioning hostname "functional-034342"
	I1101 09:38:11.725640  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:11.744599  307280 main.go:143] libmachine: Using SSH client type: native
	I1101 09:38:11.744922  307280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1101 09:38:11.744932  307280 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-034342 && echo "functional-034342" | sudo tee /etc/hostname
	I1101 09:38:11.907392  307280 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-034342
	
	I1101 09:38:11.907458  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:11.926112  307280 main.go:143] libmachine: Using SSH client type: native
	I1101 09:38:11.926440  307280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1101 09:38:11.926454  307280 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-034342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-034342/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-034342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:38:12.091106  307280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:38:12.091121  307280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:38:12.091150  307280 ubuntu.go:190] setting up certificates
	I1101 09:38:12.091167  307280 provision.go:84] configureAuth start
	I1101 09:38:12.091227  307280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-034342
	I1101 09:38:12.115579  307280 provision.go:143] copyHostCerts
	I1101 09:38:12.115665  307280 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:38:12.115675  307280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:38:12.115748  307280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:38:12.115849  307280 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:38:12.115853  307280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:38:12.115879  307280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:38:12.115972  307280 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:38:12.115976  307280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:38:12.115997  307280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:38:12.116041  307280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.functional-034342 san=[127.0.0.1 192.168.49.2 functional-034342 localhost minikube]
	I1101 09:38:12.299917  307280 provision.go:177] copyRemoteCerts
	I1101 09:38:12.299970  307280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:38:12.300020  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:12.318619  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:12.429960  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:38:12.448422  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:38:12.467783  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:38:12.485724  307280 provision.go:87] duration metric: took 394.514487ms to configureAuth
	I1101 09:38:12.485742  307280 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:38:12.485937  307280 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:12.486032  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:12.507338  307280 main.go:143] libmachine: Using SSH client type: native
	I1101 09:38:12.507649  307280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1101 09:38:12.507664  307280 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:38:17.875883  307280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:38:17.875896  307280 machine.go:97] duration metric: took 6.319701719s to provisionDockerMachine
	I1101 09:38:17.875906  307280 start.go:293] postStartSetup for "functional-034342" (driver="docker")
	I1101 09:38:17.875916  307280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:38:17.876008  307280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:38:17.876045  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:17.893463  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:17.998958  307280 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:38:18.003844  307280 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:38:18.003863  307280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:38:18.003873  307280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:38:18.003932  307280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:38:18.004021  307280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:38:18.004099  307280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/test/nested/copy/287135/hosts -> hosts in /etc/test/nested/copy/287135
	I1101 09:38:18.004148  307280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/287135
	I1101 09:38:18.012511  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:38:18.031936  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/test/nested/copy/287135/hosts --> /etc/test/nested/copy/287135/hosts (40 bytes)
	I1101 09:38:18.050369  307280 start.go:296] duration metric: took 174.449055ms for postStartSetup
	I1101 09:38:18.050481  307280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:38:18.050523  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:18.068428  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:18.171306  307280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:38:18.176565  307280 fix.go:56] duration metric: took 6.640777873s for fixHost
	I1101 09:38:18.176580  307280 start.go:83] releasing machines lock for "functional-034342", held for 6.640815921s
	I1101 09:38:18.176648  307280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-034342
	I1101 09:38:18.194151  307280 ssh_runner.go:195] Run: cat /version.json
	I1101 09:38:18.194194  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:18.194213  307280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:38:18.194300  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:18.218601  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:18.219868  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:18.412067  307280 ssh_runner.go:195] Run: systemctl --version
	I1101 09:38:18.418916  307280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:38:18.456105  307280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:38:18.460616  307280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:38:18.460673  307280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:38:18.468846  307280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:38:18.468860  307280 start.go:496] detecting cgroup driver to use...
	I1101 09:38:18.468892  307280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:38:18.468953  307280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:38:18.484587  307280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:38:18.498325  307280 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:38:18.498388  307280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:38:18.514484  307280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:38:18.528005  307280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:38:18.670065  307280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:38:18.804572  307280 docker.go:234] disabling docker service ...
	I1101 09:38:18.804646  307280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:38:18.819898  307280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:38:18.834129  307280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:38:18.963795  307280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:38:19.098432  307280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:38:19.111885  307280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:38:19.126307  307280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:38:19.126375  307280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.135581  307280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:38:19.135640  307280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.144659  307280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.153384  307280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.162607  307280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:38:19.170816  307280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.179754  307280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.188808  307280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:38:19.205983  307280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:38:19.214507  307280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:38:19.222317  307280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:38:19.362069  307280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:38:19.577830  307280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:38:19.577898  307280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:38:19.586262  307280 start.go:564] Will wait 60s for crictl version
	I1101 09:38:19.586318  307280 ssh_runner.go:195] Run: which crictl
	I1101 09:38:19.596506  307280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:38:19.622576  307280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:38:19.622669  307280 ssh_runner.go:195] Run: crio --version
	I1101 09:38:19.651895  307280 ssh_runner.go:195] Run: crio --version
	I1101 09:38:19.685830  307280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:38:19.688828  307280 cli_runner.go:164] Run: docker network inspect functional-034342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:38:19.705158  307280 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:38:19.712329  307280 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1101 09:38:19.715178  307280 kubeadm.go:884] updating cluster {Name:functional-034342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:38:19.715307  307280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:38:19.715378  307280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:38:19.749271  307280 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:38:19.749282  307280 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:38:19.749342  307280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:38:19.775876  307280 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:38:19.775888  307280 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:38:19.775895  307280 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1101 09:38:19.775991  307280 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-034342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:38:19.776076  307280 ssh_runner.go:195] Run: crio config
	I1101 09:38:19.846588  307280 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1101 09:38:19.846614  307280 cni.go:84] Creating CNI manager for ""
	I1101 09:38:19.846625  307280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:38:19.846639  307280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:38:19.846662  307280 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-034342 NodeName:functional-034342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:38:19.846775  307280 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-034342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:38:19.846839  307280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:38:19.857727  307280 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:38:19.857808  307280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:38:19.865404  307280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:38:19.878662  307280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:38:19.892324  307280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1101 09:38:19.905263  307280 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:38:19.909392  307280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:38:20.095868  307280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:38:20.111914  307280 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342 for IP: 192.168.49.2
	I1101 09:38:20.111925  307280 certs.go:195] generating shared ca certs ...
	I1101 09:38:20.111939  307280 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:38:20.112076  307280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:38:20.112127  307280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:38:20.112133  307280 certs.go:257] generating profile certs ...
	I1101 09:38:20.112217  307280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.key
	I1101 09:38:20.112261  307280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/apiserver.key.ebeb320a
	I1101 09:38:20.112336  307280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/proxy-client.key
	I1101 09:38:20.112449  307280 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:38:20.112477  307280 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:38:20.112483  307280 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:38:20.112506  307280 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:38:20.112528  307280 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:38:20.112549  307280 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:38:20.112589  307280 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:38:20.113186  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:38:20.132360  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:38:20.151871  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:38:20.171788  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:38:20.190271  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:38:20.208385  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:38:20.225667  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:38:20.244301  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:38:20.261580  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:38:20.278938  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:38:20.296305  307280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:38:20.313707  307280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:38:20.326674  307280 ssh_runner.go:195] Run: openssl version
	I1101 09:38:20.332954  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:38:20.341278  307280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:38:20.344836  307280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:38:20.344897  307280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:38:20.385474  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:38:20.393187  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:38:20.401194  307280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:38:20.404650  307280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:38:20.404722  307280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:38:20.445506  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:38:20.453225  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:38:20.461175  307280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:38:20.464737  307280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:38:20.464791  307280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:38:20.505635  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:38:20.513981  307280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:38:20.517930  307280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:38:20.560093  307280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:38:20.602691  307280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:38:20.644752  307280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:38:20.685895  307280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:38:20.727477  307280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:38:20.769076  307280 kubeadm.go:401] StartCluster: {Name:functional-034342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:20.769150  307280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:38:20.769223  307280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:38:20.799993  307280 cri.go:89] found id: "600497321df3ccf06770ffaccae2152a17a2a3f6811674c7bc446b638b96cffe"
	I1101 09:38:20.800004  307280 cri.go:89] found id: "c9904904b41178564e9d1495d0f89df8b9cdfb0fa818ec254ff10606fe5603a4"
	I1101 09:38:20.800008  307280 cri.go:89] found id: "e9ca322a40c177008e6d0ed51b27111e160574468275da81e04749b06362d906"
	I1101 09:38:20.800010  307280 cri.go:89] found id: "6037e4e7f7488f2850218a1d668b74a79433c8d241e2b6d06473b8b99fe432a1"
	I1101 09:38:20.800013  307280 cri.go:89] found id: "3a2afbc82ad41028419c90d819a2c6b7794b4ac5ed961ffacf63798134b6dbfa"
	I1101 09:38:20.800016  307280 cri.go:89] found id: "350023ad1d835494436a8f8ac8d8ed9e03f373cc9cfe4203c2aed6007dd9a0a3"
	I1101 09:38:20.800018  307280 cri.go:89] found id: "55c18dda446c4fded47a4c9ef89968fbf9b45e3e6ec06738e7d71503e8cd9f63"
	I1101 09:38:20.800021  307280 cri.go:89] found id: "f544e6d6d601e14a101b995c9a0d762d2a740f91a620814723a6c0e788259750"
	I1101 09:38:20.800023  307280 cri.go:89] found id: ""
	I1101 09:38:20.800072  307280 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:38:20.810615  307280 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:38:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:38:20.810678  307280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:38:20.818424  307280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:38:20.818434  307280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:38:20.818481  307280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:38:20.825662  307280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:38:20.826210  307280 kubeconfig.go:125] found "functional-034342" server: "https://192.168.49.2:8441"
	I1101 09:38:20.827548  307280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:38:20.835583  307280 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-01 09:36:05.189386580 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-01 09:38:19.899750920 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1101 09:38:20.835593  307280 kubeadm.go:1161] stopping kube-system containers ...
	I1101 09:38:20.835604  307280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 09:38:20.835658  307280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:38:20.866263  307280 cri.go:89] found id: "600497321df3ccf06770ffaccae2152a17a2a3f6811674c7bc446b638b96cffe"
	I1101 09:38:20.866275  307280 cri.go:89] found id: "c9904904b41178564e9d1495d0f89df8b9cdfb0fa818ec254ff10606fe5603a4"
	I1101 09:38:20.866279  307280 cri.go:89] found id: "e9ca322a40c177008e6d0ed51b27111e160574468275da81e04749b06362d906"
	I1101 09:38:20.866281  307280 cri.go:89] found id: "6037e4e7f7488f2850218a1d668b74a79433c8d241e2b6d06473b8b99fe432a1"
	I1101 09:38:20.866283  307280 cri.go:89] found id: "3a2afbc82ad41028419c90d819a2c6b7794b4ac5ed961ffacf63798134b6dbfa"
	I1101 09:38:20.866286  307280 cri.go:89] found id: "350023ad1d835494436a8f8ac8d8ed9e03f373cc9cfe4203c2aed6007dd9a0a3"
	I1101 09:38:20.866289  307280 cri.go:89] found id: "55c18dda446c4fded47a4c9ef89968fbf9b45e3e6ec06738e7d71503e8cd9f63"
	I1101 09:38:20.866291  307280 cri.go:89] found id: "f544e6d6d601e14a101b995c9a0d762d2a740f91a620814723a6c0e788259750"
	I1101 09:38:20.866293  307280 cri.go:89] found id: ""
	I1101 09:38:20.866298  307280 cri.go:252] Stopping containers: [600497321df3ccf06770ffaccae2152a17a2a3f6811674c7bc446b638b96cffe c9904904b41178564e9d1495d0f89df8b9cdfb0fa818ec254ff10606fe5603a4 e9ca322a40c177008e6d0ed51b27111e160574468275da81e04749b06362d906 6037e4e7f7488f2850218a1d668b74a79433c8d241e2b6d06473b8b99fe432a1 3a2afbc82ad41028419c90d819a2c6b7794b4ac5ed961ffacf63798134b6dbfa 350023ad1d835494436a8f8ac8d8ed9e03f373cc9cfe4203c2aed6007dd9a0a3 55c18dda446c4fded47a4c9ef89968fbf9b45e3e6ec06738e7d71503e8cd9f63 f544e6d6d601e14a101b995c9a0d762d2a740f91a620814723a6c0e788259750]
	I1101 09:38:20.866358  307280 ssh_runner.go:195] Run: which crictl
	I1101 09:38:20.870180  307280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 600497321df3ccf06770ffaccae2152a17a2a3f6811674c7bc446b638b96cffe c9904904b41178564e9d1495d0f89df8b9cdfb0fa818ec254ff10606fe5603a4 e9ca322a40c177008e6d0ed51b27111e160574468275da81e04749b06362d906 6037e4e7f7488f2850218a1d668b74a79433c8d241e2b6d06473b8b99fe432a1 3a2afbc82ad41028419c90d819a2c6b7794b4ac5ed961ffacf63798134b6dbfa 350023ad1d835494436a8f8ac8d8ed9e03f373cc9cfe4203c2aed6007dd9a0a3 55c18dda446c4fded47a4c9ef89968fbf9b45e3e6ec06738e7d71503e8cd9f63 f544e6d6d601e14a101b995c9a0d762d2a740f91a620814723a6c0e788259750
	I1101 09:38:20.937794  307280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 09:38:21.056625  307280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:38:21.064765  307280 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov  1 09:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov  1 09:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov  1 09:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov  1 09:36 /etc/kubernetes/scheduler.conf
	
	I1101 09:38:21.064827  307280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1101 09:38:21.073068  307280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1101 09:38:21.080509  307280 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:38:21.080560  307280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:38:21.088107  307280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1101 09:38:21.095843  307280 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:38:21.095931  307280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:38:21.104147  307280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1101 09:38:21.112980  307280 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:38:21.113039  307280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:38:21.121385  307280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:38:21.129626  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:38:21.176076  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:38:23.499821  307280 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.323719729s)
	I1101 09:38:23.499881  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:38:23.716525  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:38:23.783305  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:38:23.871734  307280 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:38:23.871799  307280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:38:24.372197  307280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:38:24.872841  307280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:38:24.893295  307280 api_server.go:72] duration metric: took 1.021571043s to wait for apiserver process to appear ...
	I1101 09:38:24.893310  307280 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:38:24.893334  307280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:38:28.407720  307280 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:38:28.407738  307280 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:38:28.407750  307280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:38:28.467710  307280 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:38:28.467727  307280 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:38:28.894279  307280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:38:28.925634  307280 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:38:28.925653  307280 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:38:29.393765  307280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:38:29.424923  307280 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:38:29.424943  307280 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:38:29.893470  307280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:38:29.903166  307280 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1101 09:38:29.917524  307280 api_server.go:141] control plane version: v1.34.1
	I1101 09:38:29.917540  307280 api_server.go:131] duration metric: took 5.024225725s to wait for apiserver health ...
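
The lines above show the restart waiting on the apiserver by polling /healthz: 403 means the anonymous probe is rejected while RBAC bootstrap roles are still being created, 500 means post-start hooks are still failing, and only a 200 ends the wait. Below is a minimal standalone sketch of that polling pattern, not minikube's actual api_server.go code; the endpoint, timeout, and interval are assumptions taken from the log.

// healthz_poll.go - hypothetical sketch of the /healthz polling seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The probe is anonymous, so the apiserver's self-signed cert is not verified here.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks pending)
			// both mean "keep waiting"; only 200 counts as healthy.
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
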
	I1101 09:38:29.917548  307280 cni.go:84] Creating CNI manager for ""
	I1101 09:38:29.917554  307280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:38:29.921498  307280 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:38:29.924439  307280 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:38:29.928876  307280 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:38:29.928887  307280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:38:29.941926  307280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
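
The CNI step above checks for the portmap plugin, copies a kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the bundled kubectl. The sketch below mirrors that sequence as it would look run directly on the node; paths come from the log, and this is not minikube's cni.go implementation.

// cni_apply.go - hypothetical sketch of the CNI manifest application above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Verify the portmap CNI plugin exists before applying the manifest.
	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
		fmt.Println("portmap plugin missing:", err)
		return
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
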
	I1101 09:38:30.494453  307280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:38:30.498078  307280 system_pods.go:59] 8 kube-system pods found
	I1101 09:38:30.498098  307280 system_pods.go:61] "coredns-66bc5c9577-jxp7x" [5dfa3991-1bca-4dd3-80e3-7c1626dbff0d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:38:30.498105  307280 system_pods.go:61] "etcd-functional-034342" [d875a82c-5f07-4410-87eb-5f5b7744f05f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:38:30.498109  307280 system_pods.go:61] "kindnet-6qnvd" [d5050928-f244-48b5-afea-12d2e4ffd401] Running
	I1101 09:38:30.498117  307280 system_pods.go:61] "kube-apiserver-functional-034342" [e179b679-e8a1-4d60-9103-8aa9dab089d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:38:30.498123  307280 system_pods.go:61] "kube-controller-manager-functional-034342" [f58b4951-eedb-40b4-a3a6-9081cf3fcdbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:38:30.498127  307280 system_pods.go:61] "kube-proxy-2spnh" [fb360c22-3ad2-43f4-a324-74ed67c3abd4] Running
	I1101 09:38:30.498133  307280 system_pods.go:61] "kube-scheduler-functional-034342" [0e424ec7-32de-484d-a658-3aae3b310dd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:38:30.498136  307280 system_pods.go:61] "storage-provisioner" [919369ab-c944-45f4-ad3c-b1c412220f33] Running
	I1101 09:38:30.498140  307280 system_pods.go:74] duration metric: took 3.676869ms to wait for pod list to return data ...
	I1101 09:38:30.498147  307280 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:38:30.500984  307280 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:38:30.501003  307280 node_conditions.go:123] node cpu capacity is 2
	I1101 09:38:30.501013  307280 node_conditions.go:105] duration metric: took 2.862546ms to run NodePressure ...
	I1101 09:38:30.501084  307280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:38:30.772181  307280 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 09:38:30.776376  307280 kubeadm.go:744] kubelet initialised
	I1101 09:38:30.776388  307280 kubeadm.go:745] duration metric: took 4.194195ms waiting for restarted kubelet to initialise ...
	I1101 09:38:30.776402  307280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:38:30.785904  307280 ops.go:34] apiserver oom_adj: -16
	I1101 09:38:30.785915  307280 kubeadm.go:602] duration metric: took 9.967476792s to restartPrimaryControlPlane
	I1101 09:38:30.785922  307280 kubeadm.go:403] duration metric: took 10.016856696s to StartCluster
	I1101 09:38:30.785937  307280 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:38:30.786001  307280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:38:30.786716  307280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:38:30.786959  307280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:38:30.787160  307280 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:30.787194  307280 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:38:30.787252  307280 addons.go:70] Setting storage-provisioner=true in profile "functional-034342"
	I1101 09:38:30.787266  307280 addons.go:239] Setting addon storage-provisioner=true in "functional-034342"
	W1101 09:38:30.787270  307280 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:38:30.787290  307280 host.go:66] Checking if "functional-034342" exists ...
	I1101 09:38:30.787725  307280 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
	I1101 09:38:30.788174  307280 addons.go:70] Setting default-storageclass=true in profile "functional-034342"
	I1101 09:38:30.788190  307280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-034342"
	I1101 09:38:30.788453  307280 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
	I1101 09:38:30.791299  307280 out.go:179] * Verifying Kubernetes components...
	I1101 09:38:30.794369  307280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:38:30.815294  307280 addons.go:239] Setting addon default-storageclass=true in "functional-034342"
	W1101 09:38:30.815305  307280 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:38:30.815339  307280 host.go:66] Checking if "functional-034342" exists ...
	I1101 09:38:30.815843  307280 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
	I1101 09:38:30.824354  307280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:38:30.827214  307280 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:38:30.827226  307280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:38:30.827288  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:30.837718  307280 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:38:30.837730  307280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:38:30.837796  307280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:38:30.865821  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:30.870877  307280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:38:31.008480  307280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:38:31.027126  307280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:38:31.061154  307280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:38:31.942841  307280 node_ready.go:35] waiting up to 6m0s for node "functional-034342" to be "Ready" ...
	I1101 09:38:31.946185  307280 node_ready.go:49] node "functional-034342" is "Ready"
	I1101 09:38:31.946200  307280 node_ready.go:38] duration metric: took 3.342414ms for node "functional-034342" to be "Ready" ...
	I1101 09:38:31.946212  307280 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:38:31.946274  307280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:38:31.953898  307280 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:38:31.956898  307280 addons.go:515] duration metric: took 1.169680473s for enable addons: enabled=[storage-provisioner default-storageclass]
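
The addon step above copies storage-provisioner.yaml and storageclass.yaml to /etc/kubernetes/addons and applies each with the bundled kubectl under the node's kubeconfig. A hypothetical sketch of that apply loop, not minikube's addons.go, with file names and paths taken from the log:

// addons_apply.go - hypothetical sketch of the addon manifest application above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, manifest := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		// Mirrors the logged command: sudo with KUBECONFIG set for the bundled kubectl.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "-f", manifest)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s: %s err=%v\n", manifest, out, err)
	}
}
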
	I1101 09:38:31.960699  307280 api_server.go:72] duration metric: took 1.173712343s to wait for apiserver process to appear ...
	I1101 09:38:31.960733  307280 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:38:31.960753  307280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 09:38:31.970277  307280 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1101 09:38:31.971192  307280 api_server.go:141] control plane version: v1.34.1
	I1101 09:38:31.971203  307280 api_server.go:131] duration metric: took 10.465446ms to wait for apiserver health ...
	I1101 09:38:31.971211  307280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:38:31.974894  307280 system_pods.go:59] 8 kube-system pods found
	I1101 09:38:31.974928  307280 system_pods.go:61] "coredns-66bc5c9577-jxp7x" [5dfa3991-1bca-4dd3-80e3-7c1626dbff0d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:38:31.974937  307280 system_pods.go:61] "etcd-functional-034342" [d875a82c-5f07-4410-87eb-5f5b7744f05f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:38:31.974942  307280 system_pods.go:61] "kindnet-6qnvd" [d5050928-f244-48b5-afea-12d2e4ffd401] Running
	I1101 09:38:31.974948  307280 system_pods.go:61] "kube-apiserver-functional-034342" [e179b679-e8a1-4d60-9103-8aa9dab089d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:38:31.974954  307280 system_pods.go:61] "kube-controller-manager-functional-034342" [f58b4951-eedb-40b4-a3a6-9081cf3fcdbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:38:31.974958  307280 system_pods.go:61] "kube-proxy-2spnh" [fb360c22-3ad2-43f4-a324-74ed67c3abd4] Running
	I1101 09:38:31.974967  307280 system_pods.go:61] "kube-scheduler-functional-034342" [0e424ec7-32de-484d-a658-3aae3b310dd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:38:31.974970  307280 system_pods.go:61] "storage-provisioner" [919369ab-c944-45f4-ad3c-b1c412220f33] Running
	I1101 09:38:31.974974  307280 system_pods.go:74] duration metric: took 3.759538ms to wait for pod list to return data ...
	I1101 09:38:31.974981  307280 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:38:31.977512  307280 default_sa.go:45] found service account: "default"
	I1101 09:38:31.977523  307280 default_sa.go:55] duration metric: took 2.53779ms for default service account to be created ...
	I1101 09:38:31.977530  307280 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:38:31.981335  307280 system_pods.go:86] 8 kube-system pods found
	I1101 09:38:31.981353  307280 system_pods.go:89] "coredns-66bc5c9577-jxp7x" [5dfa3991-1bca-4dd3-80e3-7c1626dbff0d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:38:31.981360  307280 system_pods.go:89] "etcd-functional-034342" [d875a82c-5f07-4410-87eb-5f5b7744f05f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:38:31.981364  307280 system_pods.go:89] "kindnet-6qnvd" [d5050928-f244-48b5-afea-12d2e4ffd401] Running
	I1101 09:38:31.981370  307280 system_pods.go:89] "kube-apiserver-functional-034342" [e179b679-e8a1-4d60-9103-8aa9dab089d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:38:31.981376  307280 system_pods.go:89] "kube-controller-manager-functional-034342" [f58b4951-eedb-40b4-a3a6-9081cf3fcdbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:38:31.981379  307280 system_pods.go:89] "kube-proxy-2spnh" [fb360c22-3ad2-43f4-a324-74ed67c3abd4] Running
	I1101 09:38:31.981384  307280 system_pods.go:89] "kube-scheduler-functional-034342" [0e424ec7-32de-484d-a658-3aae3b310dd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:38:31.981387  307280 system_pods.go:89] "storage-provisioner" [919369ab-c944-45f4-ad3c-b1c412220f33] Running
	I1101 09:38:31.981393  307280 system_pods.go:126] duration metric: took 3.858985ms to wait for k8s-apps to be running ...
	I1101 09:38:31.981400  307280 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:38:31.981462  307280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:38:31.994169  307280 system_svc.go:56] duration metric: took 12.75876ms WaitForService to wait for kubelet
	I1101 09:38:31.994188  307280 kubeadm.go:587] duration metric: took 1.207206651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:38:31.994205  307280 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:38:31.998220  307280 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:38:31.998237  307280 node_conditions.go:123] node cpu capacity is 2
	I1101 09:38:31.998250  307280 node_conditions.go:105] duration metric: took 4.039099ms to run NodePressure ...
	I1101 09:38:31.998262  307280 start.go:242] waiting for startup goroutines ...
	I1101 09:38:31.998269  307280 start.go:247] waiting for cluster config update ...
	I1101 09:38:31.998280  307280 start.go:256] writing updated cluster config ...
	I1101 09:38:31.998619  307280 ssh_runner.go:195] Run: rm -f paused
	I1101 09:38:32.003715  307280 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:38:32.007845  307280 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxp7x" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:38:34.014085  307280 pod_ready.go:104] pod "coredns-66bc5c9577-jxp7x" is not "Ready", error: <nil>
	I1101 09:38:36.013409  307280 pod_ready.go:94] pod "coredns-66bc5c9577-jxp7x" is "Ready"
	I1101 09:38:36.013426  307280 pod_ready.go:86] duration metric: took 4.005565955s for pod "coredns-66bc5c9577-jxp7x" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:36.016588  307280 pod_ready.go:83] waiting for pod "etcd-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:38:38.022344  307280 pod_ready.go:104] pod "etcd-functional-034342" is not "Ready", error: <nil>
	W1101 09:38:40.024767  307280 pod_ready.go:104] pod "etcd-functional-034342" is not "Ready", error: <nil>
	I1101 09:38:41.022146  307280 pod_ready.go:94] pod "etcd-functional-034342" is "Ready"
	I1101 09:38:41.022162  307280 pod_ready.go:86] duration metric: took 5.005560591s for pod "etcd-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:41.024562  307280 pod_ready.go:83] waiting for pod "kube-apiserver-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.029898  307280 pod_ready.go:94] pod "kube-apiserver-functional-034342" is "Ready"
	I1101 09:38:42.029913  307280 pod_ready.go:86] duration metric: took 1.005334174s for pod "kube-apiserver-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.032438  307280 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.042410  307280 pod_ready.go:94] pod "kube-controller-manager-functional-034342" is "Ready"
	I1101 09:38:42.042424  307280 pod_ready.go:86] duration metric: took 9.973408ms for pod "kube-controller-manager-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.044963  307280 pod_ready.go:83] waiting for pod "kube-proxy-2spnh" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.050632  307280 pod_ready.go:94] pod "kube-proxy-2spnh" is "Ready"
	I1101 09:38:42.050646  307280 pod_ready.go:86] duration metric: took 5.670798ms for pod "kube-proxy-2spnh" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.221316  307280 pod_ready.go:83] waiting for pod "kube-scheduler-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.620730  307280 pod_ready.go:94] pod "kube-scheduler-functional-034342" is "Ready"
	I1101 09:38:42.620744  307280 pod_ready.go:86] duration metric: took 399.411761ms for pod "kube-scheduler-functional-034342" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:38:42.620754  307280 pod_ready.go:40] duration metric: took 10.617003851s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
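
The "extra waiting" step above cycles through the control-plane pod labels until each pod reports Ready or disappears. A rough equivalent using kubectl wait instead of minikube's internal client is sketched below; the label list is copied from the log, and the 4m timeout matches the logged budget.

// pod_ready_wait.go - hypothetical sketch of the per-label readiness wait above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	labels := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, l := range labels {
		// kubectl wait blocks until every matching pod reports the Ready condition.
		cmd := exec.Command("kubectl", "-n", "kube-system", "wait", "pod",
			"-l", l, "--for=condition=Ready", "--timeout=4m")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s: %s err=%v\n", l, out, err)
	}
}
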
	I1101 09:38:42.686758  307280 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:38:42.689961  307280 out.go:179] * Done! kubectl is now configured to use "functional-034342" cluster and "default" namespace by default
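
The final line before "Done!" reports the client/server version skew (kubectl 1.33.2 against cluster 1.34.1, minor skew 1); kubectl is supported within one minor version of the apiserver, so a skew of 1 only rates a note. A tiny sketch of that arithmetic, under the assumption that only the minor component matters:

// version_skew.go - hypothetical sketch of the minor-skew check reported above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1]) // error ignored in this sketch
	return m
}

func main() {
	client, server := "1.33.2", "1.34.1" // versions from the log line
	skew := minor(server) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (warn if > 1)\n", skew)
}
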
	
	
	==> CRI-O <==
	Nov 01 09:39:18 functional-034342 crio[3774]: time="2025-11-01T09:39:18.890986047Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-bhdwr Namespace:default ID:5e465e260c910bfa909bb44d60e0c9dcbdfdbaf140b66a125b0873c5410252c8 UID:5a8d2ab5-b959-4fd7-aab2-2f1c817fe403 NetNS:/var/run/netns/802d1a21-7e47-4b85-b67b-29a700be5e8c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785a8}] Aliases:map[]}"
	Nov 01 09:39:18 functional-034342 crio[3774]: time="2025-11-01T09:39:18.891132831Z" level=info msg="Checking pod default_hello-node-75c85bcc94-bhdwr for CNI network kindnet (type=ptp)"
	Nov 01 09:39:18 functional-034342 crio[3774]: time="2025-11-01T09:39:18.894751745Z" level=info msg="Ran pod sandbox 5e465e260c910bfa909bb44d60e0c9dcbdfdbaf140b66a125b0873c5410252c8 with infra container: default/hello-node-75c85bcc94-bhdwr/POD" id=1fcde340-5f35-4388-acf2-0f16858a654f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:39:18 functional-034342 crio[3774]: time="2025-11-01T09:39:18.896332677Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=18b8087e-650a-44cb-9d47-98e42b0169cf name=/runtime.v1.ImageService/PullImage
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.866032298Z" level=info msg="Stopping pod sandbox: 9a9bfa7274a2f888e6267daa4b192c21f96dcaa7f3bfd36b483f44ac3ef40350" id=aea98903-5842-4f08-b310-568cc30190f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.866089883Z" level=info msg="Stopped pod sandbox (already stopped): 9a9bfa7274a2f888e6267daa4b192c21f96dcaa7f3bfd36b483f44ac3ef40350" id=aea98903-5842-4f08-b310-568cc30190f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.866567585Z" level=info msg="Removing pod sandbox: 9a9bfa7274a2f888e6267daa4b192c21f96dcaa7f3bfd36b483f44ac3ef40350" id=ff3cfb9f-ea43-4311-b3b9-a6f535a059df name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.870182855Z" level=info msg="Removed pod sandbox: 9a9bfa7274a2f888e6267daa4b192c21f96dcaa7f3bfd36b483f44ac3ef40350" id=ff3cfb9f-ea43-4311-b3b9-a6f535a059df name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.87087771Z" level=info msg="Stopping pod sandbox: 8db3a39609e7dc61f4adb518ac93452149ea445df6f05c2ec8c1e8856645dcb1" id=1fd4c283-6b2c-4cd9-98d4-0780a4da23a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.870950293Z" level=info msg="Stopped pod sandbox (already stopped): 8db3a39609e7dc61f4adb518ac93452149ea445df6f05c2ec8c1e8856645dcb1" id=1fd4c283-6b2c-4cd9-98d4-0780a4da23a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.871496854Z" level=info msg="Removing pod sandbox: 8db3a39609e7dc61f4adb518ac93452149ea445df6f05c2ec8c1e8856645dcb1" id=d9926bce-7794-46f4-92ad-4e5f2b46ae96 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.875005522Z" level=info msg="Removed pod sandbox: 8db3a39609e7dc61f4adb518ac93452149ea445df6f05c2ec8c1e8856645dcb1" id=d9926bce-7794-46f4-92ad-4e5f2b46ae96 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.875446021Z" level=info msg="Stopping pod sandbox: 5b3e0f242c6ad6c0654a2952d3c72582fe09b8b8056c633faa05b2cc7484cf7c" id=963f8880-8c98-47ef-b228-e7277787397c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.875532677Z" level=info msg="Stopped pod sandbox (already stopped): 5b3e0f242c6ad6c0654a2952d3c72582fe09b8b8056c633faa05b2cc7484cf7c" id=963f8880-8c98-47ef-b228-e7277787397c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.875893044Z" level=info msg="Removing pod sandbox: 5b3e0f242c6ad6c0654a2952d3c72582fe09b8b8056c633faa05b2cc7484cf7c" id=353375e4-cda9-4bfa-b904-70e1a724b65f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:39:23 functional-034342 crio[3774]: time="2025-11-01T09:39:23.879408628Z" level=info msg="Removed pod sandbox: 5b3e0f242c6ad6c0654a2952d3c72582fe09b8b8056c633faa05b2cc7484cf7c" id=353375e4-cda9-4bfa-b904-70e1a724b65f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:39:31 functional-034342 crio[3774]: time="2025-11-01T09:39:31.85200321Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=940ab75e-5178-4452-8090-3ce718c78bc8 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:39:41 functional-034342 crio[3774]: time="2025-11-01T09:39:41.851505434Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=590115d9-2bb0-4405-aea6-3f45c5361254 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:40:01 functional-034342 crio[3774]: time="2025-11-01T09:40:01.851035617Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=87c6a575-d245-4e33-b84d-e3a998432bcd name=/runtime.v1.ImageService/PullImage
	Nov 01 09:40:21 functional-034342 crio[3774]: time="2025-11-01T09:40:21.853322605Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4f607da1-e4f9-4c09-8847-d1c176330f1e name=/runtime.v1.ImageService/PullImage
	Nov 01 09:40:52 functional-034342 crio[3774]: time="2025-11-01T09:40:52.850901455Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=76c253de-5be4-4c59-8256-6148dbc91056 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:41:52 functional-034342 crio[3774]: time="2025-11-01T09:41:52.851563085Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=49baf190-c59d-42d8-8fa7-c5ada6ed0669 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:20 functional-034342 crio[3774]: time="2025-11-01T09:42:20.850699147Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7e913ae9-7dfd-4c40-9fd1-e42aec9a12da name=/runtime.v1.ImageService/PullImage
	Nov 01 09:44:45 functional-034342 crio[3774]: time="2025-11-01T09:44:45.852444108Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3214d4d0-70df-48e0-a274-1461f7af1900 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:45:15 functional-034342 crio[3774]: time="2025-11-01T09:45:15.85108991Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=89631849-e398-4655-a49e-c4afcd98c376 name=/runtime.v1.ImageService/PullImage
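
The CRI-O log above shows repeated, increasingly spaced pull attempts for kicbase/echo-server:latest with no corresponding "Pulled image" entry, which lines up with the hello-node pod never becoming ready. One way to probe a stuck pull from the node is sketched below with crictl; this is a debugging suggestion, not something the test harness runs.

// image_check.go - hypothetical sketch: retry the pull directly, then list images
// to see whether kicbase/echo-server:latest ever landed in CRI-O's store.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("$ sudo %v\n%s err=%v\n", args, out, err)
}

func main() {
	run("crictl", "pull", "kicbase/echo-server:latest")
	run("crictl", "images") // look for echo-server in the output
}
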
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dc540245174a4       docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424   9 minutes ago       Running             myfrontend                0                   0c53233938bfe       sp-pod                                      default
	8d05f0813b5f9       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   93b62b8371463       nginx-svc                                   default
	3d195b0de7f23       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   ecffb0947a87c       storage-provisioner                         kube-system
	c3ffa9bf3a95e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   c6ab92a08f370       kube-proxy-2spnh                            kube-system
	91750d2198c4a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   f1f5a79fd0a76       coredns-66bc5c9577-jxp7x                    kube-system
	1bf98430a8d0f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   c5533ab784676       kindnet-6qnvd                               kube-system
	443ef0db28b95       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   a174f363fcb65       kube-apiserver-functional-034342            kube-system
	80fe54275b772       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   ef11106692bfe       kube-controller-manager-functional-034342   kube-system
	a56b4269e6b96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   d63cf63000160       etcd-functional-034342                      kube-system
	f95b1d66a7afc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   6246f374380ca       kube-scheduler-functional-034342            kube-system
	600497321df3c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       3                   ecffb0947a87c       storage-provisioner                         kube-system
	c9904904b4117       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   d63cf63000160       etcd-functional-034342                      kube-system
	6037e4e7f7488       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   ef11106692bfe       kube-controller-manager-functional-034342   kube-system
	3a2afbc82ad41       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   6246f374380ca       kube-scheduler-functional-034342            kube-system
	350023ad1d835       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   c5533ab784676       kindnet-6qnvd                               kube-system
	55c18dda446c4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   c6ab92a08f370       kube-proxy-2spnh                            kube-system
	f544e6d6d601e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   f1f5a79fd0a76       coredns-66bc5c9577-jxp7x                    kube-system
	
	
	==> coredns [91750d2198c4a28902cb1baaad0565e71a91242b69eaf1f187123f73739a2ed5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49273 - 54537 "HINFO IN 6424166390706626837.1588275644751157920. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02524508s
	
	
	==> coredns [f544e6d6d601e14a101b995c9a0d762d2a740f91a620814723a6c0e788259750] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33606 - 58377 "HINFO IN 1534657079463350645.1400055606101627283. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023265097s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-034342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-034342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=functional-034342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_36_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:36:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-034342
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:48:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:47:52 +0000   Sat, 01 Nov 2025 09:36:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:47:52 +0000   Sat, 01 Nov 2025 09:36:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:47:52 +0000   Sat, 01 Nov 2025 09:36:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:47:52 +0000   Sat, 01 Nov 2025 09:37:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-034342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8b7ec0d9-c23f-4993-a77f-f05310ad6682
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bhdwr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-9n7nd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-jxp7x                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-034342                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-6qnvd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-034342             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-034342    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2spnh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-034342             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-034342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-034342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-034342 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-034342 event: Registered Node functional-034342 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-034342 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-034342 event: Registered Node functional-034342 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-034342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-034342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-034342 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-034342 event: Registered Node functional-034342 in Controller
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a56b4269e6b9638f98fcf7063b7ca9dcdb9c59dbeb1cb09e7044bd8baf83066b] <==
	{"level":"warn","ts":"2025-11-01T09:38:26.626129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.651935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.689994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.725855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.745686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.770525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.796071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.847856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.867014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.888772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.926622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.944311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:26.977744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.006556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.027502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.064041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.085543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.105926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.133855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.173890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.198555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:38:27.238104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:48:25.433907Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1125}
	{"level":"info","ts":"2025-11-01T09:48:25.457207Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1125,"took":"22.928647ms","hash":4071045657,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1425408,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-01T09:48:25.457274Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4071045657,"revision":1125,"compact-revision":-1}
	
	
	==> etcd [c9904904b41178564e9d1495d0f89df8b9cdfb0fa818ec254ff10606fe5603a4] <==
	{"level":"warn","ts":"2025-11-01T09:37:47.089468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.102765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.122483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.169832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.197948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.216760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.269009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37838","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:38:12.688261Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:38:12.688344Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-034342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T09:38:12.688452Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:38:12.835960Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:38:12.837508Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:38:12.837564Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:38:12.837612Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:38:12.837622Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:38:12.837602Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-01T09:38:12.837810Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:38:12.837858Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:38:12.837677Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:38:12.837934Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:38:12.837969Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:38:12.841936Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T09:38:12.842016Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:38:12.842050Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T09:38:12.842057Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-034342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:49:03 up  1:31,  0 user,  load average: 0.21, 0.52, 1.45
	Linux functional-034342 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1bf98430a8d0f5a0b6dd9ab767d780c93ceb0c0d3b32f5ea7880a39c920b08ff] <==
	I1101 09:46:59.558424       1 main.go:301] handling current node
	I1101 09:47:09.558776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:09.558813       1 main.go:301] handling current node
	I1101 09:47:19.557983       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:19.558019       1 main.go:301] handling current node
	I1101 09:47:29.557912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:29.558020       1 main.go:301] handling current node
	I1101 09:47:39.558712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:39.558820       1 main.go:301] handling current node
	I1101 09:47:49.557964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:49.558002       1 main.go:301] handling current node
	I1101 09:47:59.558742       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:59.558776       1 main.go:301] handling current node
	I1101 09:48:09.558482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:09.558514       1 main.go:301] handling current node
	I1101 09:48:19.558186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:19.558220       1 main.go:301] handling current node
	I1101 09:48:29.558448       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:29.558479       1 main.go:301] handling current node
	I1101 09:48:39.561774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:39.561813       1 main.go:301] handling current node
	I1101 09:48:49.558314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:49.558351       1 main.go:301] handling current node
	I1101 09:48:59.558895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:48:59.559008       1 main.go:301] handling current node
	
	
	==> kindnet [350023ad1d835494436a8f8ac8d8ed9e03f373cc9cfe4203c2aed6007dd9a0a3] <==
	I1101 09:37:40.419030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:37:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:37:40.622597       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:37:40.622628       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:37:40.622638       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1101 09:37:40.622981       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 09:37:40.622997       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:37:40.623111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:37:40.623181       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:37:40.623209       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:37:41.565325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:37:41.607292       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:37:42.194898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:37:42.209115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:37:43.912234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:37:44.048451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:37:44.966200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:37:45.058244       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:37:49.422841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:37:49.423007       1 metrics.go:72] Registering metrics
	I1101 09:37:49.423115       1 controller.go:711] "Syncing nftables rules"
	I1101 09:38:00.623122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:38:00.623289       1 main.go:301] handling current node
	I1101 09:38:10.628877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:38:10.628922       1 main.go:301] handling current node
	
	
	==> kube-apiserver [443ef0db28b9542aad8a4d825a15542db8848079a970c482b9f1d8fb088b1eb1] <==
	I1101 09:38:28.508894       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:38:28.514638       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:38:28.514773       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:38:28.514884       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:38:28.514899       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:38:28.514906       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:38:28.514911       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:38:28.527093       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:38:28.535619       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:38:28.554226       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:38:28.860459       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:38:29.317105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:38:30.487105       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:38:30.633834       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:38:30.699745       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:38:30.707819       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:38:31.309086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:38:31.582001       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:38:31.633563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:38:46.158982       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.203.146"}
	I1101 09:38:52.379828       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.106.236"}
	I1101 09:39:01.127976       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.252.252"}
	E1101 09:39:18.443532       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39928: use of closed network connection
	I1101 09:39:18.644030       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.86.37"}
	I1101 09:48:28.475781       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6037e4e7f7488f2850218a1d668b74a79433c8d241e2b6d06473b8b99fe432a1] <==
	I1101 09:37:51.256570       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:37:51.259431       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:37:51.261850       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:37:51.263978       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:37:51.267224       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:37:51.269355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:37:51.270548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:37:51.270614       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:37:51.271793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:37:51.275966       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:37:51.280096       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:37:51.291751       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:37:51.291845       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:37:51.291893       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:37:51.291768       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:37:51.292213       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:37:51.293406       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:37:51.296800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:37:51.297832       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:37:51.300068       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:37:51.302316       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:37:51.309871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:37:51.309910       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:37:51.309920       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:37:51.313529       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-controller-manager [80fe54275b772e3e55e88191699c62e449f4a005c957aea09027fe258b8985a2] <==
	I1101 09:38:31.301761       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:38:31.301804       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:38:31.301986       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:38:31.304757       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:38:31.316027       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:38:31.321042       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:38:31.325079       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:38:31.325137       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:38:31.325204       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:38:31.325996       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:38:31.327871       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:38:31.328038       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:38:31.331223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:38:31.331392       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:38:31.338060       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:38:31.343747       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:38:31.343897       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:38:31.343968       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:38:31.348692       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:38:31.348788       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:38:31.350278       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:38:31.357786       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:38:31.364269       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:38:31.364379       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:38:31.364411       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [55c18dda446c4fded47a4c9ef89968fbf9b45e3e6ec06738e7d71503e8cd9f63] <==
	I1101 09:37:39.277919       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:37:39.383619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:37:39.384433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-034342&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:37:40.333557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-034342&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:37:42.977096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-034342&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 09:37:48.084163       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:37:48.084200       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:37:48.084300       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:37:48.115091       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:37:48.115168       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:37:48.121559       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:37:48.122120       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:37:48.122201       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:37:48.124345       1 config.go:200] "Starting service config controller"
	I1101 09:37:48.124431       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:37:48.124477       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:37:48.124506       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:37:48.124541       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:37:48.124568       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:37:48.125267       1 config.go:309] "Starting node config controller"
	I1101 09:37:48.126211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:37:48.126297       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:37:48.225313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:37:48.225315       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:37:48.225364       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c3ffa9bf3a95e942200dbfe76bab3b6151a86b23ba3e75c742c9e87b740a7705] <==
	I1101 09:38:29.342586       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:38:29.739301       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:38:29.840093       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:38:29.841852       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:38:29.842111       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:38:29.923624       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:38:29.923735       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:38:29.974157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:38:29.974450       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:38:29.974465       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:38:29.987425       1 config.go:200] "Starting service config controller"
	I1101 09:38:29.996116       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:38:29.996181       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:38:29.996187       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:38:29.996211       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:38:29.996218       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:38:29.997061       1 config.go:309] "Starting node config controller"
	I1101 09:38:29.997073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:38:29.997081       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:38:30.097501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:38:30.097540       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:38:30.097578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3a2afbc82ad41028419c90d819a2c6b7794b4ac5ed961ffacf63798134b6dbfa] <==
	E1101 09:37:47.988382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:37:47.989336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:37:47.989452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:37:47.989529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:37:47.989609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:37:47.989680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:37:47.989619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:37:47.989768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:37:47.989839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:37:47.989900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:37:47.989912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:37:47.989964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:37:47.990037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:37:47.990151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:37:47.993332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:37:47.993633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:37:47.993945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:37:47.998155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 09:37:48.970082       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:38:12.682257       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:38:12.682372       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:38:12.682400       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:38:12.682422       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:38:12.682713       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:38:12.682730       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f95b1d66a7afccd56f3e73d3c3a3e1541213597ff616e3bc6716e58d22ca18f5] <==
	I1101 09:38:25.578936       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:38:29.788751       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:38:29.788851       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:38:29.797740       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:38:29.797883       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:38:29.797967       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:38:29.798010       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:38:29.798055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:38:29.798095       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:38:29.798168       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:38:29.798242       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:38:29.900389       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:38:29.900627       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:38:29.901631       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:46:19 functional-034342 kubelet[4094]: E1101 09:46:19.851151    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:46:21 functional-034342 kubelet[4094]: E1101 09:46:21.852050    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:46:32 functional-034342 kubelet[4094]: E1101 09:46:32.850365    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:46:36 functional-034342 kubelet[4094]: E1101 09:46:36.851343    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:46:46 functional-034342 kubelet[4094]: E1101 09:46:46.850907    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:46:48 functional-034342 kubelet[4094]: E1101 09:46:48.850338    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:47:01 functional-034342 kubelet[4094]: E1101 09:47:01.850973    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:47:03 functional-034342 kubelet[4094]: E1101 09:47:03.852235    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:47:15 functional-034342 kubelet[4094]: E1101 09:47:15.853072    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:47:16 functional-034342 kubelet[4094]: E1101 09:47:16.850385    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:47:27 functional-034342 kubelet[4094]: E1101 09:47:27.852447    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:47:30 functional-034342 kubelet[4094]: E1101 09:47:30.850957    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:47:42 functional-034342 kubelet[4094]: E1101 09:47:42.850431    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:47:42 functional-034342 kubelet[4094]: E1101 09:47:42.850459    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:47:54 functional-034342 kubelet[4094]: E1101 09:47:54.850701    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:47:57 functional-034342 kubelet[4094]: E1101 09:47:57.850697    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:48:08 functional-034342 kubelet[4094]: E1101 09:48:08.851278    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:48:12 functional-034342 kubelet[4094]: E1101 09:48:12.850831    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:48:21 functional-034342 kubelet[4094]: E1101 09:48:21.851894    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:48:23 functional-034342 kubelet[4094]: E1101 09:48:23.852851    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:48:33 functional-034342 kubelet[4094]: E1101 09:48:33.851198    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:48:37 functional-034342 kubelet[4094]: E1101 09:48:37.850634    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:48:48 functional-034342 kubelet[4094]: E1101 09:48:48.851193    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	Nov 01 09:48:52 functional-034342 kubelet[4094]: E1101 09:48:52.850798    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bhdwr" podUID="5a8d2ab5-b959-4fd7-aab2-2f1c817fe403"
	Nov 01 09:49:02 functional-034342 kubelet[4094]: E1101 09:49:02.850231    4094 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9n7nd" podUID="baa6a22a-e8e5-478f-b01f-46c7df7f927c"
	
	
	==> storage-provisioner [3d195b0de7f235633405024d713958e9a3b4bcd58081d77ca2ef5bd198bcaae0] <==
	W1101 09:48:39.527493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:41.530424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:41.534954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:43.538443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:43.542641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:45.546524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:45.553326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:47.556114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:47.560453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:49.563313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:49.567940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:51.572263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:51.576633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:53.579464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:53.586098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:55.589179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:55.593608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:57.596507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:57.600914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:59.604399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:48:59.611035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:49:01.614242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:49:01.620601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:49:03.624117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:49:03.628924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [600497321df3ccf06770ffaccae2152a17a2a3f6811674c7bc446b638b96cffe] <==
	I1101 09:37:57.252028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:37:57.264931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:37:57.265056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:37:57.267100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:38:00.721792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:38:04.987028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:38:08.585609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:38:11.639852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-034342 -n functional-034342
helpers_test.go:269: (dbg) Run:  kubectl --context functional-034342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-bhdwr hello-node-connect-7d85dfc575-9n7nd
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-034342 describe pod hello-node-75c85bcc94-bhdwr hello-node-connect-7d85dfc575-9n7nd
helpers_test.go:290: (dbg) kubectl --context functional-034342 describe pod hello-node-75c85bcc94-bhdwr hello-node-connect-7d85dfc575-9n7nd:

-- stdout --
	Name:             hello-node-75c85bcc94-bhdwr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-034342/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:39:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hkgv6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hkgv6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m45s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bhdwr to functional-034342
	  Normal   Pulling    6m44s (x5 over 9m46s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m44s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m44s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m41s (x20 over 9m45s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m27s (x21 over 9m45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-9n7nd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-034342/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:39:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-474qt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-474qt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9n7nd to functional-034342
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x41 over 10m)     kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.52s)
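The kubelet events above point at the root cause: CRI-O's short-name resolution is in enforcing mode, so the unqualified image name kicbase/echo-server matches more than one configured search registry and the pull is refused, leaving both pods in ImagePullBackOff. A minimal sketch of how this could be confirmed and worked around, with illustrative commands only (docker.io is assumed to be the intended registry, and /etc/containers is assumed to be where the node keeps its registries configuration):
	out/minikube-linux-arm64 -p functional-034342 ssh -- grep -r short-name /etc/containers/
	kubectl --context functional-034342 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
A fully qualified reference bypasses short-name resolution entirely; alternatively, a short-name alias could be added under /etc/containers/registries.conf.d on the node.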

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-034342 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-034342 expose deployment hello-node --type=NodePort --port=8080
E1101 09:39:18.582743  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bhdwr" [5a8d2ab5-b959-4fd7-aab2-2f1c817fe403] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1101 09:41:34.714563  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:02.424915  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:46:34.715063  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-034342 -n functional-034342
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 09:49:19.148973573 +0000 UTC m=+1249.680630338
functional_test.go:1460: (dbg) Run:  kubectl --context functional-034342 describe po hello-node-75c85bcc94-bhdwr -n default
functional_test.go:1460: (dbg) kubectl --context functional-034342 describe po hello-node-75c85bcc94-bhdwr -n default:
Name:             hello-node-75c85bcc94-bhdwr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-034342/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:39:18 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hkgv6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hkgv6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bhdwr to functional-034342
Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-034342 logs hello-node-75c85bcc94-bhdwr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-034342 logs hello-node-75c85bcc94-bhdwr -n default: exit status 1 (118.374155ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bhdwr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-034342 logs hello-node-75c85bcc94-bhdwr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.94s)
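This is the same short-name failure as above: kubectl create deployment was given the unqualified name kicbase/echo-server, which CRI-O refuses to resolve in enforcing mode. A minimal sketch of the fully qualified variant (illustrative only; docker.io is assumed as the source registry):
	kubectl --context functional-034342 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-034342 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-034342 rollout status deployment/hello-node --timeout=120s
With a fully qualified image the pull no longer depends on the node's unqualified-search-registries list.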

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 service --namespace=default --https --url hello-node: exit status 115 (527.893887ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30683
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-034342 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
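SVC_UNREACHABLE here only restates the earlier failure: the hello-node NodePort (30683) exists, but no running pod backs the service, so minikube refuses to print the URL. An illustrative pre-check (not part of the test) for whether the service has ready endpoints:
	kubectl --context functional-034342 get pods -l app=hello-node -o wide
	kubectl --context functional-034342 get endpoints hello-node
An empty ENDPOINTS column would confirm the backing pod never became Ready.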

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 service hello-node --url --format={{.IP}}: exit status 115 (402.588846ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-034342 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 service hello-node --url: exit status 115 (412.245264ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30683
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-034342 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30683
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image load --daemon kicbase/echo-server:functional-034342 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 image load --daemon kicbase/echo-server:functional-034342 --alsologtostderr: (1.521945864s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-034342" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)
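image load --daemon completed without error, yet image ls does not show the tag, so either the push into CRI-O failed silently or the image is listed under a different name. A minimal manual sketch (illustrative only; on CRI-O a loaded image is often listed with a registry prefix such as localhost/ or docker.io/, so matching on the suffix is safer):
	out/minikube-linux-arm64 -p functional-034342 image load --daemon kicbase/echo-server:functional-034342
	out/minikube-linux-arm64 -p functional-034342 image ls | grep echo-server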

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image load --daemon kicbase/echo-server:functional-034342 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 image load --daemon kicbase/echo-server:functional-034342 --alsologtostderr: (1.198387904s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-034342" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-034342
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image load --daemon kicbase/echo-server:functional-034342 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-034342" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image save kicbase/echo-server:functional-034342 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
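image save exited cleanly but the tarball was never written to the workspace path. A quick manual sketch (the scratch path is hypothetical) to see whether save produces an archive at all:
	out/minikube-linux-arm64 -p functional-034342 image save kicbase/echo-server:functional-034342 /tmp/echo-server.tar --alsologtostderr
	ls -lh /tmp/echo-server.tar && tar -tf /tmp/echo-server.tar | head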

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
2025/11/01 09:49:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 09:49:32.416480  315138 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:32.416683  315138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:32.416697  315138 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:32.416702  315138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:32.416986  315138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:49:32.417620  315138 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:49:32.417791  315138 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:49:32.418302  315138 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
	I1101 09:49:32.437808  315138 ssh_runner.go:195] Run: systemctl --version
	I1101 09:49:32.437880  315138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
	I1101 09:49:32.464349  315138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
	I1101 09:49:32.572379  315138 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1101 09:49:32.572450  315138 cache_images.go:255] Failed to load cached images for "functional-034342": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1101 09:49:32.572477  315138 cache_images.go:267] failed pushing to: functional-034342

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
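This failure is downstream of ImageSaveToFile: the tarball at /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar was never created, so the load has nothing to read. An illustrative guard that makes the dependency explicit:
	test -f /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar && out/minikube-linux-arm64 -p functional-034342 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar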

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-034342
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image save --daemon kicbase/echo-server:functional-034342 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-034342
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-034342: exit status 1 (20.33181ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-034342

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-034342

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
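image save --daemon is expected to put the image back into the host's docker daemon, where the test then looks it up under the localhost/ prefix. An illustrative check that lists every echo-server tag regardless of prefix, to tell a naming mismatch apart from a missing image:
	docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server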

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (489.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 09:58:51.967142  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:59:19.668358  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:01:34.714600  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:03:51.962611  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (7m49.994515679s)

                                                
                                                
-- stdout --
	* [ha-832582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-832582" primary control-plane node in "ha-832582" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-832582-m02" control-plane node in "ha-832582" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:58:02.918042  342768 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:02.918211  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918243  342768 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:02.918263  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918533  342768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:58:02.918914  342768 out.go:368] Setting JSON to false
	I1101 09:58:02.919786  342768 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6032,"bootTime":1761985051,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:58:02.919890  342768 start.go:143] virtualization:  
	I1101 09:58:02.923079  342768 out.go:179] * [ha-832582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:58:02.926767  342768 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:58:02.926822  342768 notify.go:221] Checking for updates...
	I1101 09:58:02.932590  342768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:58:02.935541  342768 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:02.938382  342768 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:58:02.941196  342768 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:58:02.944021  342768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:58:02.947258  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:02.947826  342768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:58:02.981516  342768 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:58:02.981632  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.054383  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.04442767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.054505  342768 docker.go:319] overlay module found
	I1101 09:58:03.057603  342768 out.go:179] * Using the docker driver based on existing profile
	I1101 09:58:03.060439  342768 start.go:309] selected driver: docker
	I1101 09:58:03.060472  342768 start.go:930] validating driver "docker" against &{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.060601  342768 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:58:03.060705  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.115910  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.107176811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.116329  342768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:58:03.116359  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:03.116411  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:03.116461  342768 start.go:353] cluster config:
	{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.119656  342768 out.go:179] * Starting "ha-832582" primary control-plane node in "ha-832582" cluster
	I1101 09:58:03.122400  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:03.125294  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:03.128178  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:03.128237  342768 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:58:03.128250  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:03.128253  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:03.128348  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:03.128359  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:03.128499  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.147945  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:03.147967  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:03.147995  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:03.148022  342768 start.go:360] acquireMachinesLock for ha-832582: {Name:mk797b578da0c53fbacfede5c9484035101b2ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:03.148089  342768 start.go:364] duration metric: took 45.35µs to acquireMachinesLock for "ha-832582"
	I1101 09:58:03.148111  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:03.148119  342768 fix.go:54] fixHost starting: 
	I1101 09:58:03.148373  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.165181  342768 fix.go:112] recreateIfNeeded on ha-832582: state=Stopped err=<nil>
	W1101 09:58:03.165215  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:03.168512  342768 out.go:252] * Restarting existing docker container for "ha-832582" ...
	I1101 09:58:03.168595  342768 cli_runner.go:164] Run: docker start ha-832582
	I1101 09:58:03.407252  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.433226  342768 kic.go:430] container "ha-832582" state is running.
	I1101 09:58:03.433643  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:03.456608  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.456845  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:03.456903  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:03.480040  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:03.480367  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:03.480376  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:03.480952  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33199: read: connection reset by peer
	I1101 09:58:06.633155  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.633179  342768 ubuntu.go:182] provisioning hostname "ha-832582"
	I1101 09:58:06.633238  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.651044  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.651360  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.651374  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582 && echo "ha-832582" | sudo tee /etc/hostname
	I1101 09:58:06.812426  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.812507  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.832800  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.833109  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.833135  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:06.978124  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:06.978162  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:06.978183  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:06.978200  342768 provision.go:84] configureAuth start
	I1101 09:58:06.978265  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:06.995491  342768 provision.go:143] copyHostCerts
	I1101 09:58:06.995536  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995574  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:06.995588  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995674  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:06.995773  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995796  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:06.995810  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995841  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:06.995930  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995952  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:06.995964  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995990  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:06.996061  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582 san=[127.0.0.1 192.168.49.2 ha-832582 localhost minikube]
	I1101 09:58:07.519067  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:07.519138  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:07.519200  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.536957  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:07.642333  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:07.642391  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:07.660960  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:07.661018  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:07.677785  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:07.677843  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 09:58:07.694547  342768 provision.go:87] duration metric: took 716.319917ms to configureAuth
	I1101 09:58:07.694583  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:07.694801  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:07.694909  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.712779  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:07.713093  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:07.713114  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:08.052242  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:08.052306  342768 machine.go:97] duration metric: took 4.595450733s to provisionDockerMachine
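	(Editorial note on the restart flow above: provisioning writes CRI-O's insecure-registry drop-in over SSH and restarts the crio service before Kubernetes is brought back up. As an illustrative check only, not part of the captured log, the drop-in and the service state on the restarted node could be confirmed with:
	out/minikube-linux-arm64 -p ha-832582 ssh -- cat /etc/sysconfig/crio.minikube
	out/minikube-linux-arm64 -p ha-832582 ssh -- sudo systemctl is-active crio
	)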
	I1101 09:58:08.052334  342768 start.go:293] postStartSetup for "ha-832582" (driver="docker")
	I1101 09:58:08.052361  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:08.052459  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:08.052536  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.073358  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.177812  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:08.181279  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:08.181304  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:08.181314  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:08.181367  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:08.181443  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:08.181461  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:08.181557  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:08.189009  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:08.205960  342768 start.go:296] duration metric: took 153.59516ms for postStartSetup
	I1101 09:58:08.206069  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:08.206130  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.222745  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.322878  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:08.327536  342768 fix.go:56] duration metric: took 5.179409798s for fixHost
	I1101 09:58:08.327559  342768 start.go:83] releasing machines lock for "ha-832582", held for 5.179459334s
	I1101 09:58:08.327648  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:08.343793  342768 ssh_runner.go:195] Run: cat /version.json
	I1101 09:58:08.343844  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.344088  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:08.344140  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.362917  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.364182  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.559877  342768 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:08.566123  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:08.601278  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:08.606120  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:08.606226  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:08.613618  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
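Before picking a CNI, the restart path checks /etc/cni/net.d and renames any bridge or podman configs by appending .mk_disabled so they cannot conflict with kindnet; here none were found. A minimal Go sketch of that renaming step, using only the standard library (the directory and the .mk_disabled suffix come from the command above; this is an illustration, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames every bridge/podman CNI config under dir
// by appending ".mk_disabled", mirroring the `find ... -exec mv` command above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", moved)
}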
	I1101 09:58:08.613639  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:08.613670  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:08.613775  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:08.628944  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:08.641906  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:08.641985  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:08.657234  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:08.670311  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:08.776949  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:08.895687  342768 docker.go:234] disabling docker service ...
	I1101 09:58:08.895763  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:08.912227  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:08.924716  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:09.033164  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:09.152553  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:09.165610  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:09.180758  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:09.180842  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.190144  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:09.190223  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.199488  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.208470  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.217564  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:09.226234  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.235095  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.243429  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.252434  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:09.260020  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:09.267457  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.373363  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
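The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to cgroupfs, re-adds conmon_cgroup and the net.ipv4.ip_unprivileged_port_start sysctl, and then restarts crio. A rough Go equivalent of the two main substitutions (file path and values come from the log; the sketch is illustrative and assumes local root access rather than ssh_runner):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf pins the pause image and cgroup manager in a crio drop-in,
// like the `sed -i 's|^.*pause_image = .*$|...|'` commands in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = cgroup.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}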
	I1101 09:58:09.495940  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:58:09.496021  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:58:09.499937  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:58:09.500082  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:58:09.503791  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:58:09.533304  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:58:09.533395  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.560842  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.595644  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:58:09.598486  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:58:09.614798  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:58:09.618883  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
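The one-liner above makes the host.minikube.internal mapping idempotent: it filters any existing entry out of /etc/hosts and appends the gateway address 192.168.49.1. The same update can be sketched in Go (illustrative only; the real command runs on the node over SSH and needs root to write /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\thost.minikube.internal"
// and appends "ip\thost.minikube.internal", like the shell pipeline above.
func ensureHostsEntry(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}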
	I1101 09:58:09.629569  342768 kubeadm.go:884] updating cluster {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:58:09.629840  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:09.629912  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.667936  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.667962  342768 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:58:09.668023  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.693223  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.693250  342768 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:58:09.693259  342768 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:58:09.693353  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:58:09.693438  342768 ssh_runner.go:195] Run: crio config
	I1101 09:58:09.751790  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:09.751814  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:09.751834  342768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:58:09.751876  342768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832582 NodeName:ha-832582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:58:09.752075  342768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-832582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:58:09.752102  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:58:09.752152  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:58:09.764023  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:09.764122  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
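Because the ip_vs modules are not loadable in this kicbase image, kube-vip is configured without IPVS load-balancing and relies on ARP-based failover of the VIP 192.168.49.254 on eth0. The manifest above is produced by filling a template with those values; a shortened text/template sketch of that idea follows (the template text is abbreviated and illustrative, not minikube's embedded kube-vip template):

package main

import (
	"os"
	"text/template"
)

// vipParams carries the values substituted into the kube-vip manifest.
type vipParams struct {
	Address   string // control-plane VIP, e.g. 192.168.49.254
	Port      int    // API server port
	Interface string // interface the VIP is announced on
	Image     string // kube-vip image
}

// manifestTmpl is a shortened, illustrative fragment of a kube-vip static pod.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .Address }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	params := vipParams{
		Address:   "192.168.49.254",
		Port:      8443,
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v1.0.1",
	}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}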
	I1101 09:58:09.764180  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:58:09.772107  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:58:09.772242  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 09:58:09.779796  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 09:58:09.792458  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:58:09.805570  342768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 09:58:09.818435  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:58:09.831753  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:58:09.835442  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.845042  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.952431  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:58:09.969023  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.2
	I1101 09:58:09.969056  342768 certs.go:195] generating shared ca certs ...
	I1101 09:58:09.969072  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:09.969241  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:58:09.969294  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:58:09.969307  342768 certs.go:257] generating profile certs ...
	I1101 09:58:09.969413  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:58:09.969456  342768 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2
	I1101 09:58:09.969474  342768 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 09:58:10.972603  342768 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 ...
	I1101 09:58:10.972640  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2: {Name:mka954bd27ed170438bba591673547458d094ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972825  342768 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 ...
	I1101 09:58:10.972842  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2: {Name:mk1061e2154b96baf6cb0ecee80a8eda645c1f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972926  342768 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt
	I1101 09:58:10.973062  342768 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key
	I1101 09:58:10.973204  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:58:10.973222  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:58:10.973238  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:58:10.973256  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:58:10.973273  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:58:10.973288  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:58:10.973300  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:58:10.973317  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:58:10.973327  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:58:10.973379  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:58:10.973412  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:58:10.973425  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:58:10.973451  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:58:10.973476  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:58:10.973504  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:58:10.973552  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:10.973584  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:10.973600  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:58:10.973611  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:58:10.977021  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:58:11.008672  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:58:11.039364  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:58:11.065401  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:58:11.091095  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:58:11.131902  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:58:11.164406  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:58:11.198225  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:58:11.249652  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:58:11.275181  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:58:11.313024  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:58:11.348627  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:58:11.371097  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:58:11.381650  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:58:11.392802  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397197  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397269  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.466322  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:58:11.480286  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:58:11.490726  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498361  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498428  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.561754  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:58:11.576548  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:58:11.591018  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595330  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595393  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.664138  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:58:11.673663  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:58:11.677777  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:58:11.749190  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:58:11.791873  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:58:11.837053  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:58:11.885168  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:58:11.930387  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
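Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours; together with the earlier stat of apiserver-kubelet-client.crt this decides whether the existing control-plane certificates can be reused on restart. An equivalent check written with Go's crypto/x509 (the path is only an example; in the log the checks run on the remote node over SSH):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}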
	I1101 09:58:11.974056  342768 kubeadm.go:401] StartCluster: {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:11.974182  342768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:11.974253  342768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:12.007321  342768 cri.go:89] found id: "63f97ad5786a65d9b80ca88d289828cdda4b430f39036c771011f4f9a81dca4f"
	I1101 09:58:12.007345  342768 cri.go:89] found id: "fefab62a504e911c9eccaa75d59925b8ef3f49ca7726398893bf175da792fbb1"
	I1101 09:58:12.007351  342768 cri.go:89] found id: "73f1aa406ac05ed7ecdeab51e324661bb9e43e2bfe78738957991c966790c739"
	I1101 09:58:12.007355  342768 cri.go:89] found id: "6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088"
	I1101 09:58:12.007358  342768 cri.go:89] found id: "e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54"
	I1101 09:58:12.007362  342768 cri.go:89] found id: ""
	I1101 09:58:12.007432  342768 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:58:12.020873  342768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:12Z" level=error msg="open /run/runc: no such file or directory"
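To decide whether anything needs unpausing, the flow lists kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system and then asks runc for paused containers; the runc error about /run/runc missing just means there is no paused state to clean up. A small os/exec sketch of the crictl listing step (run locally for illustration, rather than through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs reported by crictl,
// one per line, matching the command shown in the log.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}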
	I1101 09:58:12.020952  342768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:58:12.030528  342768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:58:12.030550  342768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:58:12.030601  342768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:58:12.038481  342768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:12.038883  342768 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-832582" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.038992  342768 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "ha-832582" cluster setting kubeconfig missing "ha-832582" context setting]
	I1101 09:58:12.039323  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.039866  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:58:12.040348  342768 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:58:12.040368  342768 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:58:12.040374  342768 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:58:12.040379  342768 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:58:12.040387  342768 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:58:12.040718  342768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:58:12.040811  342768 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 09:58:12.049163  342768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1101 09:58:12.049190  342768 kubeadm.go:602] duration metric: took 18.632637ms to restartPrimaryControlPlane
	I1101 09:58:12.049201  342768 kubeadm.go:403] duration metric: took 75.155923ms to StartCluster
	I1101 09:58:12.049217  342768 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.049278  342768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.049947  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.050162  342768 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:58:12.050191  342768 start.go:242] waiting for startup goroutines ...
	I1101 09:58:12.050207  342768 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:58:12.050639  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.054885  342768 out.go:179] * Enabled addons: 
	I1101 09:58:12.057752  342768 addons.go:515] duration metric: took 7.532576ms for enable addons: enabled=[]
	I1101 09:58:12.057799  342768 start.go:247] waiting for cluster config update ...
	I1101 09:58:12.057809  342768 start.go:256] writing updated cluster config ...
	I1101 09:58:12.061028  342768 out.go:203] 
	I1101 09:58:12.064154  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.064273  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.067726  342768 out.go:179] * Starting "ha-832582-m02" control-plane node in "ha-832582" cluster
	I1101 09:58:12.070608  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:12.073579  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:12.076459  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:12.076487  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:12.076589  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:12.076605  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:12.076732  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.076948  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:12.105644  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:12.105664  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:12.105677  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:12.105715  342768 start.go:360] acquireMachinesLock for ha-832582-m02: {Name:mkf85ec55e1996c34472f8191eb83bcbd97a011b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:12.105766  342768 start.go:364] duration metric: took 35.365µs to acquireMachinesLock for "ha-832582-m02"
	I1101 09:58:12.105795  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:12.105801  342768 fix.go:54] fixHost starting: m02
	I1101 09:58:12.106065  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.131724  342768 fix.go:112] recreateIfNeeded on ha-832582-m02: state=Stopped err=<nil>
	W1101 09:58:12.131753  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:12.135018  342768 out.go:252] * Restarting existing docker container for "ha-832582-m02" ...
	I1101 09:58:12.135097  342768 cli_runner.go:164] Run: docker start ha-832582-m02
	I1101 09:58:12.536520  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.574712  342768 kic.go:430] container "ha-832582-m02" state is running.
	I1101 09:58:12.575112  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:12.618100  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.618407  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:12.618487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:12.650389  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:12.650705  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:12.650715  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:12.651605  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:58:15.933915  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:15.933941  342768 ubuntu.go:182] provisioning hostname "ha-832582-m02"
	I1101 09:58:15.934014  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:15.987460  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:15.987772  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:15.987789  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582-m02 && echo "ha-832582-m02" | sudo tee /etc/hostname
	I1101 09:58:16.314408  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:16.314487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.343626  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:16.343927  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:16.343944  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:16.593142  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:16.593167  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:16.593184  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:16.593195  342768 provision.go:84] configureAuth start
	I1101 09:58:16.593253  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:16.650326  342768 provision.go:143] copyHostCerts
	I1101 09:58:16.650367  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650399  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:16.650411  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650486  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:16.650567  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650589  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:16.650600  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650629  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:16.650674  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650695  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:16.650703  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650730  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:16.650781  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582-m02 san=[127.0.0.1 192.168.49.3 ha-832582-m02 localhost minikube]
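For the m02 node, provision.go issues a Docker-machine server certificate whose SANs cover 127.0.0.1, the node IP 192.168.49.3, the hostname ha-832582-m02, localhost and minikube, signed by the shared CA (ca.pem / ca-key.pem). A compact crypto/x509 sketch showing how those SANs go into a certificate template; for brevity it self-signs, whereas the real flow signs with the CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Template with the SANs listed in the log; a real server cert would be
	// signed by the minikube CA rather than self-signed.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-832582-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-832582-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}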
	I1101 09:58:16.783662  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:16.783792  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:16.783869  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.825898  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.012062  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:17.012132  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:17.068319  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:17.068382  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:58:17.096494  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:17.096557  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:17.127552  342768 provision.go:87] duration metric: took 534.343053ms to configureAuth
	I1101 09:58:17.127579  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:17.127812  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:17.127918  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.173337  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:17.173640  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:17.173660  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:17.742511  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:17.742535  342768 machine.go:97] duration metric: took 5.124117974s to provisionDockerMachine
	I1101 09:58:17.742546  342768 start.go:293] postStartSetup for "ha-832582-m02" (driver="docker")
	I1101 09:58:17.742557  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:17.742620  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:17.742669  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.776626  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.903612  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:17.910004  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:17.910040  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:17.910051  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:17.910106  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:17.910182  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:17.910189  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:17.910287  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:17.921230  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:17.949919  342768 start.go:296] duration metric: took 207.358478ms for postStartSetup
	I1101 09:58:17.949998  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:17.950043  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.975141  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.101002  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:18.109231  342768 fix.go:56] duration metric: took 6.003422355s for fixHost
	I1101 09:58:18.109298  342768 start.go:83] releasing machines lock for "ha-832582-m02", held for 6.003516649s
	I1101 09:58:18.109404  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:18.137736  342768 out.go:179] * Found network options:
	I1101 09:58:18.140766  342768 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 09:58:18.143721  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 09:58:18.143760  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 09:58:18.143834  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:18.143887  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.144157  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:18.144209  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.176200  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.181012  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.454952  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:18.579173  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:18.579289  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:18.623083  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:18.623169  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:18.623227  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:18.623296  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:18.686246  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:18.715168  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:18.715306  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:18.776969  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:18.820029  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:19.203132  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:19.545263  342768 docker.go:234] disabling docker service ...
	I1101 09:58:19.545377  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:19.611975  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:19.661375  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:19.968591  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:20.322030  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:20.377246  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:20.428021  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:20.428136  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.448333  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:20.448440  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.494239  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.509954  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.531043  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:20.546562  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.575054  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.599209  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.627200  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:20.650938  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:20.674283  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:21.004512  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:59:51.327238  342768 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.322673918s)
	I1101 09:59:51.327311  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:59:51.327492  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:59:51.332862  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:59:51.332922  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:59:51.336719  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:59:51.365406  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:59:51.365490  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.395065  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.426575  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:59:51.429610  342768 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 09:59:51.432670  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:59:51.449128  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:59:51.452943  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:51.462372  342768 mustload.go:66] Loading cluster: ha-832582
	I1101 09:59:51.462608  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:51.462862  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:59:51.484169  342768 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:59:51.484451  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.3
	I1101 09:59:51.484466  342768 certs.go:195] generating shared ca certs ...
	I1101 09:59:51.484481  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:59:51.484596  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:59:51.484637  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:59:51.484647  342768 certs.go:257] generating profile certs ...
	I1101 09:59:51.484720  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:59:51.484783  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.cfdf3314
	I1101 09:59:51.484827  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:59:51.484840  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:59:51.484853  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:59:51.484872  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:59:51.484886  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:59:51.484897  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:59:51.484912  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:59:51.484928  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:59:51.484939  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:59:51.485004  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:59:51.485035  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:59:51.485049  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:59:51.485072  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:59:51.485099  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:59:51.485122  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:59:51.485167  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:59:51.485197  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:59:51.485216  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:51.485231  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:59:51.485289  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:59:51.505623  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:59:51.602013  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 09:59:51.606013  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 09:59:51.614285  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 09:59:51.617662  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 09:59:51.626190  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 09:59:51.629806  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 09:59:51.638050  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 09:59:51.641429  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 09:59:51.649504  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 09:59:51.653190  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 09:59:51.662675  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 09:59:51.666366  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 09:59:51.675666  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:59:51.694409  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:59:51.714284  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:59:51.733851  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:59:51.752947  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:59:51.773341  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:59:51.792083  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:59:51.810450  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:59:51.829646  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:59:51.849065  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:59:51.868827  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:59:51.891330  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 09:59:51.904911  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 09:59:51.918898  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 09:59:51.934197  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 09:59:51.948234  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 09:59:51.960997  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 09:59:51.975251  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 09:59:51.989442  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:59:51.996139  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:59:52.006856  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011576  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011690  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.052830  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:59:52.061006  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:59:52.069890  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074806  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074872  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.121631  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:59:52.130945  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:59:52.140732  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145152  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145254  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.189261  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:59:52.197284  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:59:52.201018  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:59:52.244640  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:59:52.291107  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:59:52.333098  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:59:52.374947  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:59:52.416040  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:59:52.458067  342768 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 09:59:52.458177  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:59:52.458207  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:59:52.458257  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:59:52.471027  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:59:52.471117  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 09:59:52.471214  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:59:52.479864  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:59:52.479956  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 09:59:52.488040  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:59:52.502060  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:59:52.516164  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:59:52.531779  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:59:52.535746  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:52.545530  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.680054  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.695591  342768 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:59:52.696046  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:52.701457  342768 out.go:179] * Verifying Kubernetes components...
	I1101 09:59:52.704242  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.825960  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.841449  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 09:59:52.841519  342768 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 09:59:52.841815  342768 node_ready.go:35] waiting up to 6m0s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:00:24.926942  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:00:24.927351  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1101 10:00:27.343326  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:29.843264  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:32.343360  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:34.843237  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:36.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:01:43.899271  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:01:43.899642  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55716->192.168.49.2:8443: read: connection reset by peer
	W1101 10:01:46.343035  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:48.842515  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:51.342428  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:53.843341  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:56.342335  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:58.343338  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:00.842815  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:02.843269  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:05.343114  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:07.343295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:09.343359  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:11.843295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:03:17.100795  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:03:17.101130  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37558->192.168.49.2:8443: read: connection reset by peer
	W1101 10:03:19.343251  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:21.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:24.343238  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:26.842444  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:28.843273  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:31.343229  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:33.842318  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:35.842369  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:37.843231  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:39.843286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:42.342431  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:44.842376  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:46.843230  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:49.343299  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:51.843196  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:54.342397  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:06.345951  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:04:16.346594  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	I1101 10:04:18.761391  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:04:18.761797  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55754->192.168.49.2:8443: read: connection reset by peer
	W1101 10:04:20.842430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:22.842572  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:24.843325  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:27.343297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:29.842340  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:32.342396  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:34.343290  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:36.843297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:39.342353  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:41.343002  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:43.842379  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:45.843287  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:48.343254  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:50.343337  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:52.842301  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:54.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:57.343277  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:59.843343  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:01.843430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:04.342377  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:06.343265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:08.843265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:11.342401  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:13.842472  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:15.843291  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:18.343216  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:20.343304  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:22.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:25.342703  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:27.343208  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:29.842303  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:31.843204  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:34.342391  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:36.343286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:38.842462  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:50.343480  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:05:52.842736  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": context deadline exceeded
	I1101 10:05:52.842774  342768 node_ready.go:38] duration metric: took 6m0.000936091s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:05:52.846340  342768 out.go:203] 
	W1101 10:05:52.849403  342768 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:05:52.849424  342768 out.go:285] * 
	* 
	W1101 10:05:52.851598  342768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:05:52.854797  342768 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
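The stderr trace above shows the secondary control plane being re-provisioned, CRI-O being reconfigured and restarted (the restart alone took ~1m30s), and then node_ready.go polling https://192.168.49.2:8443 for the full six-minute window while every request fails with connection refused or a TLS handshake timeout, until WaitNodeCondition hits its context deadline and the run exits with GUEST_START. As a rough illustration only, the sketch below is hypothetical code, not minikube's node_ready.go; the kubeconfig path and node name are taken from the log, and it reproduces the same pattern of "will retry" warnings followed by a deadline failure.

// Hypothetical sketch of the readiness poll reflected in the log above
// (waiting up to 6m for a node to report Ready); not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready
// or the context deadline expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			// Corresponds to the "will retry" warnings: connection refused,
			// connection reset, TLS handshake timeout, and so on.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	// Kubeconfig path copied from the log ("scp memory --> /var/lib/minikube/kubeconfig").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-832582-m02", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}

A plain loop with time.After is used here instead of the apimachinery wait helpers purely to keep the sketch self-contained; the salient point is that the repeated refusals come from the control-plane endpoint itself, not from the polling logic.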
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-832582
helpers_test.go:243: (dbg) docker inspect ha-832582:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	        "Created": "2025-11-01T09:49:47.884718242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:58:03.201179109Z",
	            "FinishedAt": "2025-11-01T09:58:02.458383811Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hosts",
	        "LogPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737-json.log",
	        "Name": "/ha-832582",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-832582:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-832582",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	                "LowerDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-832582",
	                "Source": "/var/lib/docker/volumes/ha-832582/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-832582",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-832582",
	                "name.minikube.sigs.k8s.io": "ha-832582",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b1796f5bdac88308ffdad68dbe5a300087e1fdf42808f9a7bc9bb25df2947d",
	            "SandboxKey": "/var/run/docker/netns/f4b1796f5bda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-832582": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:4b:56:fb:7f:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4026c1b00639b2f23fdcf44b1c92a70df02212d3eadc8f713efc2420dc128ba",
	                    "EndpointID": "c45295fb0e9034fd21aa5c91972c347a41330627b88898fcda246b2b7e824074",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-832582",
	                        "e5a947146cd5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582: exit status 2 (17.891641118s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-832582 cp ha-832582-m03:/home/docker/cp-test.txt ha-832582-m04:/home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp testdata/cp-test.txt ha-832582-m04:/home/docker/cp-test.txt                                                             │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m04.txt │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m04_ha-832582.txt                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582.txt                                                 │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m02 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node start m02 --alsologtostderr -v 5                                                                                      │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:55 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │                     │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:55 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5                                                                                   │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:57 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ node    │ ha-832582 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:58 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:58:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:58:02.918042  342768 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:02.918211  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918243  342768 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:02.918263  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918533  342768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:58:02.918914  342768 out.go:368] Setting JSON to false
	I1101 09:58:02.919786  342768 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6032,"bootTime":1761985051,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:58:02.919890  342768 start.go:143] virtualization:  
	I1101 09:58:02.923079  342768 out.go:179] * [ha-832582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:58:02.926767  342768 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:58:02.926822  342768 notify.go:221] Checking for updates...
	I1101 09:58:02.932590  342768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:58:02.935541  342768 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:02.938382  342768 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:58:02.941196  342768 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:58:02.944021  342768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:58:02.947258  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:02.947826  342768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:58:02.981516  342768 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:58:02.981632  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.054383  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.04442767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.054505  342768 docker.go:319] overlay module found
	I1101 09:58:03.057603  342768 out.go:179] * Using the docker driver based on existing profile
	I1101 09:58:03.060439  342768 start.go:309] selected driver: docker
	I1101 09:58:03.060472  342768 start.go:930] validating driver "docker" against &{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.060601  342768 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:58:03.060705  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.115910  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.107176811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.116329  342768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:58:03.116359  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:03.116411  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:03.116461  342768 start.go:353] cluster config:
	{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.119656  342768 out.go:179] * Starting "ha-832582" primary control-plane node in "ha-832582" cluster
	I1101 09:58:03.122400  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:03.125294  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:03.128178  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:03.128237  342768 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:58:03.128250  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:03.128253  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:03.128348  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:03.128359  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:03.128499  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.147945  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:03.147967  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:03.147995  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:03.148022  342768 start.go:360] acquireMachinesLock for ha-832582: {Name:mk797b578da0c53fbacfede5c9484035101b2ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:03.148089  342768 start.go:364] duration metric: took 45.35µs to acquireMachinesLock for "ha-832582"
	I1101 09:58:03.148111  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:03.148119  342768 fix.go:54] fixHost starting: 
	I1101 09:58:03.148373  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.165181  342768 fix.go:112] recreateIfNeeded on ha-832582: state=Stopped err=<nil>
	W1101 09:58:03.165215  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:03.168512  342768 out.go:252] * Restarting existing docker container for "ha-832582" ...
	I1101 09:58:03.168595  342768 cli_runner.go:164] Run: docker start ha-832582
	I1101 09:58:03.407252  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.433226  342768 kic.go:430] container "ha-832582" state is running.
	I1101 09:58:03.433643  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:03.456608  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.456845  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:03.456903  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:03.480040  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:03.480367  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:03.480376  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:03.480952  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33199: read: connection reset by peer
	I1101 09:58:06.633155  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.633179  342768 ubuntu.go:182] provisioning hostname "ha-832582"
	I1101 09:58:06.633238  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.651044  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.651360  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.651374  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582 && echo "ha-832582" | sudo tee /etc/hostname
	I1101 09:58:06.812426  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.812507  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.832800  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.833109  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.833135  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:06.978124  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:06.978162  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:06.978183  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:06.978200  342768 provision.go:84] configureAuth start
	I1101 09:58:06.978265  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:06.995491  342768 provision.go:143] copyHostCerts
	I1101 09:58:06.995536  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995574  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:06.995588  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995674  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:06.995773  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995796  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:06.995810  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995841  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:06.995930  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995952  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:06.995964  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995990  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:06.996061  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582 san=[127.0.0.1 192.168.49.2 ha-832582 localhost minikube]
	I1101 09:58:07.519067  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:07.519138  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:07.519200  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.536957  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:07.642333  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:07.642391  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:07.660960  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:07.661018  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:07.677785  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:07.677843  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 09:58:07.694547  342768 provision.go:87] duration metric: took 716.319917ms to configureAuth
	I1101 09:58:07.694583  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:07.694801  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:07.694909  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.712779  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:07.713093  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:07.713114  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:08.052242  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:08.052306  342768 machine.go:97] duration metric: took 4.595450733s to provisionDockerMachine
	I1101 09:58:08.052334  342768 start.go:293] postStartSetup for "ha-832582" (driver="docker")
	I1101 09:58:08.052361  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:08.052459  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:08.052536  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.073358  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.177812  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:08.181279  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:08.181304  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:08.181314  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:08.181367  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:08.181443  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:08.181461  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:08.181557  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:08.189009  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:08.205960  342768 start.go:296] duration metric: took 153.59516ms for postStartSetup
	I1101 09:58:08.206069  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:08.206130  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.222745  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.322878  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:08.327536  342768 fix.go:56] duration metric: took 5.179409798s for fixHost
	I1101 09:58:08.327559  342768 start.go:83] releasing machines lock for "ha-832582", held for 5.179459334s
	I1101 09:58:08.327648  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:08.343793  342768 ssh_runner.go:195] Run: cat /version.json
	I1101 09:58:08.343844  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.344088  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:08.344140  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.362917  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.364182  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.559877  342768 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:08.566123  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:08.601278  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:08.606120  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:08.606226  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:08.613618  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:08.613639  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:08.613670  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:08.613775  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:08.628944  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:08.641906  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:08.641985  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:08.657234  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:08.670311  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:08.776949  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:08.895687  342768 docker.go:234] disabling docker service ...
	I1101 09:58:08.895763  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:08.912227  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:08.924716  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:09.033164  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:09.152553  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:09.165610  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:09.180758  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:09.180842  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.190144  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:09.190223  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.199488  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.208470  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.217564  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:09.226234  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.235095  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.243429  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.252434  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:09.260020  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:09.267457  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.373363  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:58:09.495940  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:58:09.496021  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:58:09.499937  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:58:09.500082  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:58:09.503791  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:58:09.533304  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:58:09.533395  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.560842  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.595644  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:58:09.598486  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:58:09.614798  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:58:09.618883  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.629569  342768 kubeadm.go:884] updating cluster {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:58:09.629840  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:09.629912  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.667936  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.667962  342768 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:58:09.668023  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.693223  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.693250  342768 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:58:09.693259  342768 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:58:09.693353  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:58:09.693438  342768 ssh_runner.go:195] Run: crio config
	I1101 09:58:09.751790  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:09.751814  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:09.751834  342768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:58:09.751876  342768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832582 NodeName:ha-832582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:58:09.752075  342768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-832582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:58:09.752102  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:58:09.752152  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:58:09.764023  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:09.764122  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
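	(The manifest above uses ARP-based VIP announcement with leader election only; the lsmod check at 09:58:09 found no ip_vs modules, so minikube gave up on IPVS control-plane load balancing. With the docker driver the node shares the host kernel, so a rough sketch of checking, and where the kernel supports it loading, the modules on the host is:

	    lsmod | grep ip_vs                                    # empty output: IPVS modules not loaded
	    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh    # typical IPVS scheduler modules (assumed names; needs kernel support)
	    lsmod | grep ip_vs                                    # re-check; per the log, load balancing is only enabled when these are present
	)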
	I1101 09:58:09.764180  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:58:09.772107  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:58:09.772242  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 09:58:09.779796  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 09:58:09.792458  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:58:09.805570  342768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 09:58:09.818435  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:58:09.831753  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:58:09.835442  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.845042  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.952431  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:58:09.969023  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.2
	I1101 09:58:09.969056  342768 certs.go:195] generating shared ca certs ...
	I1101 09:58:09.969072  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:09.969241  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:58:09.969294  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:58:09.969307  342768 certs.go:257] generating profile certs ...
	I1101 09:58:09.969413  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:58:09.969456  342768 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2
	I1101 09:58:09.969474  342768 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 09:58:10.972603  342768 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 ...
	I1101 09:58:10.972640  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2: {Name:mka954bd27ed170438bba591673547458d094ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972825  342768 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 ...
	I1101 09:58:10.972842  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2: {Name:mk1061e2154b96baf6cb0ecee80a8eda645c1f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972926  342768 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt
	I1101 09:58:10.973062  342768 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key
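	(The regenerated apiserver serving certificate above is issued for at least the service IP, localhost, both control-plane node IPs and the HA VIP: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.254. A quick sketch for confirming the SANs actually present in the written certificate:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	)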
	I1101 09:58:10.973204  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:58:10.973222  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:58:10.973238  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:58:10.973256  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:58:10.973273  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:58:10.973288  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:58:10.973300  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:58:10.973317  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:58:10.973327  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:58:10.973379  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:58:10.973412  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:58:10.973425  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:58:10.973451  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:58:10.973476  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:58:10.973504  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:58:10.973552  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:10.973584  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:10.973600  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:58:10.973611  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:58:10.977021  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:58:11.008672  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:58:11.039364  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:58:11.065401  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:58:11.091095  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:58:11.131902  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:58:11.164406  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:58:11.198225  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:58:11.249652  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:58:11.275181  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:58:11.313024  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:58:11.348627  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:58:11.371097  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:58:11.381650  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:58:11.392802  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397197  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397269  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.466322  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:58:11.480286  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:58:11.490726  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498361  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498428  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.561754  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:58:11.576548  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:58:11.591018  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595330  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595393  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.664138  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
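	(The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA directory convention: each CA certificate under /etc/ssl/certs gets a symlink named <subject-hash>.0 so verification code can locate it by hash. The same step by hand, as a sketch:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0, matching the log
	)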
	I1101 09:58:11.673663  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:58:11.677777  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:58:11.749190  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:58:11.791873  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:58:11.837053  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:58:11.885168  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:58:11.930387  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
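	(Each -checkend 86400 run above asks whether the certificate is still valid 24 hours, i.e. 86400 seconds, from now; openssl exits 0 if it will not have expired by then and non-zero otherwise. Checking the same set of certs by hand on the node, as a sketch:

	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	        && echo "${c}: valid for at least 24h" || echo "${c}: expires within 24h"
	    done
	)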
	I1101 09:58:11.974056  342768 kubeadm.go:401] StartCluster: {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:11.974182  342768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:11.974253  342768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:12.007321  342768 cri.go:89] found id: "63f97ad5786a65d9b80ca88d289828cdda4b430f39036c771011f4f9a81dca4f"
	I1101 09:58:12.007345  342768 cri.go:89] found id: "fefab62a504e911c9eccaa75d59925b8ef3f49ca7726398893bf175da792fbb1"
	I1101 09:58:12.007351  342768 cri.go:89] found id: "73f1aa406ac05ed7ecdeab51e324661bb9e43e2bfe78738957991c966790c739"
	I1101 09:58:12.007355  342768 cri.go:89] found id: "6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088"
	I1101 09:58:12.007358  342768 cri.go:89] found id: "e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54"
	I1101 09:58:12.007362  342768 cri.go:89] found id: ""
	I1101 09:58:12.007432  342768 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:58:12.020873  342768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:12Z" level=error msg="open /run/runc: no such file or directory"
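	(The runc listing only backs the paused-container check, so this failure is tolerated and the restart continues below. On a CRI-O node the same containers can be inspected through crictl instead, roughly:

	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system    # names, states and IDs (non-quiet variant of the command above)
	    sudo crictl inspect 63f97ad5786a65d9b80ca88d289828cdda4b430f39036c771011f4f9a81dca4f | grep -i '"state"'
	)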
	I1101 09:58:12.020952  342768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:58:12.030528  342768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:58:12.030550  342768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:58:12.030601  342768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:58:12.038481  342768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:12.038883  342768 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-832582" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.038992  342768 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "ha-832582" cluster setting kubeconfig missing "ha-832582" context setting]
	I1101 09:58:12.039323  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.039866  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:58:12.040348  342768 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:58:12.040368  342768 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:58:12.040374  342768 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:58:12.040379  342768 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:58:12.040387  342768 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:58:12.040718  342768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:58:12.040811  342768 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 09:58:12.049163  342768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1101 09:58:12.049190  342768 kubeadm.go:602] duration metric: took 18.632637ms to restartPrimaryControlPlane
	I1101 09:58:12.049201  342768 kubeadm.go:403] duration metric: took 75.155923ms to StartCluster
	I1101 09:58:12.049217  342768 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.049278  342768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.049947  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.050162  342768 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:58:12.050191  342768 start.go:242] waiting for startup goroutines ...
	I1101 09:58:12.050207  342768 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:58:12.050639  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.054885  342768 out.go:179] * Enabled addons: 
	I1101 09:58:12.057752  342768 addons.go:515] duration metric: took 7.532576ms for enable addons: enabled=[]
	I1101 09:58:12.057799  342768 start.go:247] waiting for cluster config update ...
	I1101 09:58:12.057809  342768 start.go:256] writing updated cluster config ...
	I1101 09:58:12.061028  342768 out.go:203] 
	I1101 09:58:12.064154  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.064273  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.067726  342768 out.go:179] * Starting "ha-832582-m02" control-plane node in "ha-832582" cluster
	I1101 09:58:12.070608  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:12.073579  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:12.076459  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:12.076487  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:12.076589  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:12.076605  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:12.076732  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.076948  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:12.105644  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:12.105664  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:12.105677  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:12.105715  342768 start.go:360] acquireMachinesLock for ha-832582-m02: {Name:mkf85ec55e1996c34472f8191eb83bcbd97a011b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:12.105766  342768 start.go:364] duration metric: took 35.365µs to acquireMachinesLock for "ha-832582-m02"
	I1101 09:58:12.105795  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:12.105801  342768 fix.go:54] fixHost starting: m02
	I1101 09:58:12.106065  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.131724  342768 fix.go:112] recreateIfNeeded on ha-832582-m02: state=Stopped err=<nil>
	W1101 09:58:12.131753  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:12.135018  342768 out.go:252] * Restarting existing docker container for "ha-832582-m02" ...
	I1101 09:58:12.135097  342768 cli_runner.go:164] Run: docker start ha-832582-m02
	I1101 09:58:12.536520  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.574712  342768 kic.go:430] container "ha-832582-m02" state is running.
	I1101 09:58:12.575112  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:12.618100  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.618407  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:12.618487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:12.650389  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:12.650705  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:12.650715  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:12.651605  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:58:15.933915  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:15.933941  342768 ubuntu.go:182] provisioning hostname "ha-832582-m02"
	I1101 09:58:15.934014  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:15.987460  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:15.987772  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:15.987789  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582-m02 && echo "ha-832582-m02" | sudo tee /etc/hostname
	I1101 09:58:16.314408  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:16.314487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.343626  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:16.343927  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:16.343944  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:16.593142  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:16.593167  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:16.593184  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:16.593195  342768 provision.go:84] configureAuth start
	I1101 09:58:16.593253  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:16.650326  342768 provision.go:143] copyHostCerts
	I1101 09:58:16.650367  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650399  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:16.650411  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650486  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:16.650567  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650589  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:16.650600  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650629  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:16.650674  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650695  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:16.650703  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650730  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:16.650781  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582-m02 san=[127.0.0.1 192.168.49.3 ha-832582-m02 localhost minikube]
	I1101 09:58:16.783662  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:16.783792  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:16.783869  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.825898  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.012062  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:17.012132  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:17.068319  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:17.068382  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:58:17.096494  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:17.096557  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:17.127552  342768 provision.go:87] duration metric: took 534.343053ms to configureAuth
	I1101 09:58:17.127579  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:17.127812  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:17.127918  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.173337  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:17.173640  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:17.173660  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:17.742511  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:17.742535  342768 machine.go:97] duration metric: took 5.124117974s to provisionDockerMachine
	I1101 09:58:17.742546  342768 start.go:293] postStartSetup for "ha-832582-m02" (driver="docker")
	I1101 09:58:17.742557  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:17.742620  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:17.742669  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.776626  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.903612  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:17.910004  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:17.910040  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:17.910051  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:17.910106  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:17.910182  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:17.910189  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:17.910287  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:17.921230  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:17.949919  342768 start.go:296] duration metric: took 207.358478ms for postStartSetup
	I1101 09:58:17.949998  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:17.950043  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.975141  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.101002  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:18.109231  342768 fix.go:56] duration metric: took 6.003422355s for fixHost
	I1101 09:58:18.109298  342768 start.go:83] releasing machines lock for "ha-832582-m02", held for 6.003516649s
	I1101 09:58:18.109404  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:18.137736  342768 out.go:179] * Found network options:
	I1101 09:58:18.140766  342768 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 09:58:18.143721  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 09:58:18.143760  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 09:58:18.143834  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:18.143887  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.144157  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:18.144209  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.176200  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.181012  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.454952  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:18.579173  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:18.579289  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:18.623083  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:18.623169  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:18.623227  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:18.623296  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:18.686246  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:18.715168  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:18.715306  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:18.776969  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:18.820029  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:19.203132  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:19.545263  342768 docker.go:234] disabling docker service ...
	I1101 09:58:19.545377  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:19.611975  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:19.661375  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:19.968591  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:20.322030  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:20.377246  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:20.428021  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:20.428136  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.448333  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:20.448440  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.494239  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.509954  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.531043  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:20.546562  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.575054  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.599209  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
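	(The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place. A sketch for verifying the values they should leave behind, with the expected lines reconstructed from the commands themselves:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, roughly:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])
	)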
	I1101 09:58:20.627200  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:20.650938  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:20.674283  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:21.004512  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:59:51.327238  342768 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.322673918s)
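	(A crio restart taking over a minute and a half is worth a closer look; the unit's own logs on the node are the natural starting point. A minimal sketch using standard systemd tooling — the -n node selector is an assumption about the minikube CLI, not something this run executed:

	    minikube -p ha-832582 ssh -n m02 -- 'sudo systemctl status crio --no-pager'
	    minikube -p ha-832582 ssh -n m02 -- 'sudo journalctl -u crio --no-pager | tail -n 50'
	)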
	I1101 09:59:51.327311  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:59:51.327492  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:59:51.332862  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:59:51.332922  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:59:51.336719  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:59:51.365406  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:59:51.365490  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.395065  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.426575  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:59:51.429610  342768 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 09:59:51.432670  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:59:51.449128  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:59:51.452943  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:51.462372  342768 mustload.go:66] Loading cluster: ha-832582
	I1101 09:59:51.462608  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:51.462862  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:59:51.484169  342768 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:59:51.484451  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.3
	I1101 09:59:51.484466  342768 certs.go:195] generating shared ca certs ...
	I1101 09:59:51.484481  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:59:51.484596  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:59:51.484637  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:59:51.484647  342768 certs.go:257] generating profile certs ...
	I1101 09:59:51.484720  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:59:51.484783  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.cfdf3314
	I1101 09:59:51.484827  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:59:51.484840  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:59:51.484853  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:59:51.484872  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:59:51.484886  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:59:51.484897  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:59:51.484912  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:59:51.484928  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:59:51.484939  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:59:51.485004  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:59:51.485035  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:59:51.485049  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:59:51.485072  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:59:51.485099  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:59:51.485122  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:59:51.485167  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:59:51.485197  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:59:51.485216  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:51.485231  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:59:51.485289  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:59:51.505623  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:59:51.602013  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 09:59:51.606013  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 09:59:51.614285  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 09:59:51.617662  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 09:59:51.626190  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 09:59:51.629806  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 09:59:51.638050  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 09:59:51.641429  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 09:59:51.649504  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 09:59:51.653190  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 09:59:51.662675  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 09:59:51.666366  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 09:59:51.675666  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:59:51.694409  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:59:51.714284  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:59:51.733851  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:59:51.752947  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:59:51.773341  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:59:51.792083  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:59:51.810450  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:59:51.829646  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:59:51.849065  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:59:51.868827  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:59:51.891330  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 09:59:51.904911  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 09:59:51.918898  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 09:59:51.934197  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 09:59:51.948234  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 09:59:51.960997  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 09:59:51.975251  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 09:59:51.989442  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:59:51.996139  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:59:52.006856  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011576  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011690  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.052830  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:59:52.061006  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:59:52.069890  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074806  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074872  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.121631  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:59:52.130945  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:59:52.140732  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145152  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145254  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.189261  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:59:52.197284  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:59:52.201018  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:59:52.244640  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:59:52.291107  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:59:52.333098  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:59:52.374947  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:59:52.416040  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
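Each -checkend 86400 call above only asks whether the certificate will still be valid 24 hours from now; exit status 0 means it will. A minimal sketch of reading the actual expiry behind one of those checks (the path is taken from the log; the commands are standard openssl, not part of the test harness):

  # Print the notAfter date for the same certificate the harness just checked.
  sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt

  # -checkend N exits 0 if the cert is still valid N seconds from now, non-zero otherwise.
  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expires within 24h"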
	I1101 09:59:52.458067  342768 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 09:59:52.458177  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:59:52.458207  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:59:52.458257  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:59:52.471027  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:59:52.471117  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
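The lsmod check a few lines above came back empty, so kube-vip falls back to ARP mode (vip_arp, vip_interface: eth0) instead of IPVS control-plane load balancing. A minimal sketch of loading the modules by hand and repeating the same check, assuming the node kernel actually ships them (the log only shows they were not loaded):

  # Load the common IPVS modules, then repeat the check from the log.
  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
  sudo sh -c "lsmod | grep ip_vs"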
	I1101 09:59:52.471214  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:59:52.479864  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:59:52.479956  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 09:59:52.488040  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:59:52.502060  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:59:52.516164  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:59:52.531779  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:59:52.535746  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:52.545530  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.680054  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.695591  342768 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:59:52.696046  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:52.701457  342768 out.go:179] * Verifying Kubernetes components...
	I1101 09:59:52.704242  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.825960  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.841449  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 09:59:52.841519  342768 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 09:59:52.841815  342768 node_ready.go:35] waiting up to 6m0s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:00:24.926942  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:00:24.927351  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1101 10:00:27.343326  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:29.843264  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:32.343360  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:34.843237  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:36.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:01:43.899271  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:01:43.899642  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55716->192.168.49.2:8443: read: connection reset by peer
	W1101 10:01:46.343035  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:48.842515  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:51.342428  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:53.843341  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:56.342335  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:58.343338  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:00.842815  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:02.843269  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:05.343114  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:07.343295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:09.343359  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:11.843295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:03:17.100795  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:03:17.101130  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37558->192.168.49.2:8443: read: connection reset by peer
	W1101 10:03:19.343251  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:21.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:24.343238  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:26.842444  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:28.843273  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:31.343229  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:33.842318  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:35.842369  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:37.843231  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:39.843286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:42.342431  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:44.842376  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:46.843230  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:49.343299  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:51.843196  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:54.342397  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:06.345951  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:04:16.346594  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	I1101 10:04:18.761391  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:04:18.761797  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55754->192.168.49.2:8443: read: connection reset by peer
	W1101 10:04:20.842430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:22.842572  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:24.843325  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:27.343297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:29.842340  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:32.342396  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:34.343290  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:36.843297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:39.342353  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:41.343002  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:43.842379  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:45.843287  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:48.343254  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:50.343337  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:52.842301  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:54.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:57.343277  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:59.843343  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:01.843430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:04.342377  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:06.343265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:08.843265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:11.342401  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:13.842472  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:15.843291  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:18.343216  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:20.343304  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:22.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:25.342703  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:27.343208  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:29.842303  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:31.843204  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:34.342391  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:36.343286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:38.842462  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:50.343480  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:05:52.842736  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": context deadline exceeded
	I1101 10:05:52.842774  342768 node_ready.go:38] duration metric: took 6m0.000936091s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:05:52.846340  342768 out.go:203] 
	W1101 10:05:52.849403  342768 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:05:52.849424  342768 out.go:285] * 
	W1101 10:05:52.851598  342768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:05:52.854797  342768 out.go:203] 
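The wait loop above repeatedly asks the apiserver at 192.168.49.2:8443 for the node's Ready condition and gives up after 6m. A minimal sketch of the equivalent manual check with kubectl, assuming the ha-832582 kubeconfig context that minikube normally writes (not shown in this report):

  # Print the Ready condition the node_ready wait is polling for.
  kubectl --context ha-832582 get node ha-832582-m02 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

  # Or watch until the condition changes, mirroring the retry loop in the log.
  kubectl --context ha-832582 get node ha-832582-m02 --watch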
	
	
	==> CRI-O <==
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.211892535Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6d81d35d-5e3a-4a0d-95c7-fd4ce3862a7b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.212989865Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.213090913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.218756833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.219359632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.239436241Z" level=info msg="Created container ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.240120305Z" level=info msg="Starting container: ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112" id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.24397708Z" level=info msg="Started container" PID=1243 containerID=ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112 description=kube-system/kube-controller-manager-ha-832582/kube-controller-manager id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f8bb27411a46d477c2d6c99cd3320cc05020176d2346c660a30b294ab654fd6
	Nov 01 10:05:37 ha-832582 conmon[1241]: conmon ebb69e2d4cc0850778e8 <ninfo>: container 1243 exited with status 1
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.311325101Z" level=info msg="Removing container: 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.320548328Z" level=info msg="Error loading conmon cgroup of container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: cgroup deleted" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.3238441Z" level=info msg="Removed container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.209911635Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3dba77e3-5193-4cb7-857b-77c03b8eec61 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.214760967Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d11bade9-75dd-4891-a3ac-8b6ec0818fea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217346599Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217457231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222294082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222766582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.241766495Z" level=info msg="Created container c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.242395494Z" level=info msg="Starting container: c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961" id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.245675357Z" level=info msg="Started container" PID=1257 containerID=c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961 description=kube-system/kube-apiserver-ha-832582/kube-apiserver id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=04c614211235f3aea840ff0ef3962ce76f51fc82f70daa74b0ed9c0b2a0f7f66
	Nov 01 10:06:00 ha-832582 conmon[1255]: conmon c883cef2aa1b7c987d02 <ninfo>: container 1257 exited with status 255
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.37364516Z" level=info msg="Removing container: 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.380903964Z" level=info msg="Error loading conmon cgroup of container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: cgroup deleted" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.383910222Z" level=info msg="Removed container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c883cef2aa1b7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   31 seconds ago      Exited              kube-apiserver            8                   04c614211235f       kube-apiserver-ha-832582            kube-system
	ebb69e2d4cc08       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   45 seconds ago      Exited              kube-controller-manager   9                   4f8bb27411a46       kube-controller-manager-ha-832582   kube-system
	e5bbf60599882       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      3                   51ff665c16f3c       etcd-ha-832582                      kube-system
	fefab62a504e9       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  2                   adcb5b1f5a762       kube-vip-ha-832582                  kube-system
	6fabe4bc435b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            2                   c588a4af8fecc       kube-scheduler-ha-832582            kube-system
	e24f1c760a238       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Exited              etcd                      2                   51ff665c16f3c       etcd-ha-832582                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
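The refused connection to localhost:8443 matches the crash-looping kube-apiserver shown in the CRI-O log above and in the container status below. A minimal sketch of confirming that from inside the node, assuming SSH access through the minikube CLI and the default crictl setup (neither is part of this captured output):

  # Shell into the control-plane node of the ha-832582 profile.
  minikube -p ha-832582 ssh

  # List every kube-apiserver container attempt, including exited ones,
  # then read the logs of the most recent attempt.
  sudo crictl ps -a --name kube-apiserver
  sudo crictl logs <container-id>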
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:50] overlayfs: idmapped layers are currently not supported
	[ +32.089424] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:55] overlayfs: idmapped layers are currently not supported
	[  +4.195210] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:58] overlayfs: idmapped layers are currently not supported
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54] <==
	{"level":"info","ts":"2025-11-01T10:03:28.368864Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:03:28.368907Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:03:28.368997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370653Z","caller":"etcdserver/server.go:1272","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-11-01T10:03:28.370677Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370679Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370784Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370801Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370832Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.370842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370825Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.370915Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370990Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.371010Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370965Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371030Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371047Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371056Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.374519Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:03:28.374595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.374658Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:03:28.374686Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e5bbf60599882a44b7077046577e6c6d255753632f3ad97ed0e3d65eb2697937] <==
	{"level":"info","ts":"2025-11-01T10:06:08.059029Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:08.059070Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:08.059086Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:08.359075Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-11-01T10:06:08.359243Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000587004s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-11-01T10:06:08.359300Z","caller":"traceutil/trace.go:172","msg":"trace[558471041] range","detail":"{range_begin:; range_end:; }","duration":"7.000660883s","start":"2025-11-01T10:06:01.358625Z","end":"2025-11-01T10:06:08.359286Z","steps":["trace[558471041] 'agreement among raft nodes before linearized reading'  (duration: 7.000584861s)"],"step_count":1}
	{"level":"error","ts":"2025-11-01T10:06:08.359370Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2220\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/server.go:2092"}
	{"level":"warn","ts":"2025-11-01T10:06:08.599237Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3c3ae81873ee7e73","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T10:06:08.599378Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3c3ae81873ee7e73","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-01T10:06:09.159702Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:09.159850Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:09.159933Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:09.160079Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:09.160131Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:09.595205Z","caller":"etcdserver/server.go:1814","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-832582 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-11-01T10:06:10.258687Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:10.258771Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:10.258800Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:10.258848Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:10.258869Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-11-01T10:06:11.358767Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358838Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358857Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358903Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358915Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> kernel <==
	 10:06:11 up  1:48,  0 user,  load average: 0.45, 0.90, 1.44
	Linux ha-832582 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961] <==
	I1101 10:05:40.306392       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1101 10:05:40.853033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1101 10:05:40.853065       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1101 10:05:40.853075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1101 10:05:40.853080       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1101 10:05:40.853085       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1101 10:05:40.853089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1101 10:05:40.853093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1101 10:05:40.853097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1101 10:05:40.853101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1101 10:05:40.853106       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1101 10:05:40.853110       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1101 10:05:40.853114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1101 10:05:40.870762       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:05:40.872294       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1101 10:05:40.872930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 10:05:40.879616       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:05:40.890179       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 10:05:40.890287       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 10:05:40.890929       1 instance.go:239] Using reconciler: lease
	W1101 10:05:40.892474       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.869430       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.872570       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W1101 10:06:00.892234       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1101 10:06:00.892232       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112] <==
	I1101 10:05:26.730710       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:05:27.221967       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 10:05:27.222053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:05:27.223635       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:05:27.223814       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:05:27.224036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 10:05:27.224086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:05:37.225354       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088] <==
	E1101 10:05:19.012837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:05:19.858968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:05:21.354493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:05:24.146896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:05:28.850014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:05:29.568563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:05:31.156997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:05:32.075760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:05:34.876970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:05:36.541398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:05:36.855814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:05:51.115287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:05:52.948469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:05:56.934437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:06:01.899981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50632->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:06:01.900101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50552->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:06:01.900186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50560->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:06:01.900279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50606->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:06:01.900365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50620->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:06:01.900449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50654->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:06:01.900469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50592->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:06:02.196959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:06:03.029583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:06:05.944882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:06:10.499860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	
	
	==> kubelet <==
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.380776     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.482210     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.583719     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.685293     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.786448     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.887412     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:09 ha-832582 kubelet[802]: E1101 10:06:09.988574     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.089426     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.190250     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.259510     802 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-832582\" not found"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.291083     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.392164     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.492873     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.594337     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.695313     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.796166     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.897425     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:10 ha-832582 kubelet[802]: E1101 10:06:10.998429     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.099886     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.200694     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.301519     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.402676     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.503189     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.603635     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.704371     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582: exit status 2 (331.943099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-832582" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (489.29s)
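The logs above show the likely chain of failure: kube-apiserver never completed its gRPC handshake with etcd on 127.0.0.1:2379 and exited fatally with "Error creating leases: error creating storage factory: context deadline exceeded", after which the controller-manager, scheduler and kubelet all failed against 192.168.49.2:8443 with connection refused. One way to dig further on a local reproduction, assuming the ha-832582 node is still up and reachable over SSH (a diagnostic sketch, not part of the recorded test run), is to inspect the etcd container directly:

	out/minikube-linux-arm64 ssh -p ha-832582 -- sudo crictl ps -a --name etcd
	out/minikube-linux-arm64 ssh -p ha-832582 -- sudo crictl logs --tail 50 <etcd-container-id>

The container id above is a placeholder; substitute the id reported by the first command.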

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-832582" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-832582\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-832582\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-832582\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-832582
helpers_test.go:243: (dbg) docker inspect ha-832582:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	        "Created": "2025-11-01T09:49:47.884718242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:58:03.201179109Z",
	            "FinishedAt": "2025-11-01T09:58:02.458383811Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hosts",
	        "LogPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737-json.log",
	        "Name": "/ha-832582",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-832582:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-832582",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	                "LowerDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-832582",
	                "Source": "/var/lib/docker/volumes/ha-832582/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-832582",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-832582",
	                "name.minikube.sigs.k8s.io": "ha-832582",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b1796f5bdac88308ffdad68dbe5a300087e1fdf42808f9a7bc9bb25df2947d",
	            "SandboxKey": "/var/run/docker/netns/f4b1796f5bda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-832582": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:4b:56:fb:7f:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4026c1b00639b2f23fdcf44b1c92a70df02212d3eadc8f713efc2420dc128ba",
	                    "EndpointID": "c45295fb0e9034fd21aa5c91972c347a41330627b88898fcda246b2b7e824074",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-832582",
	                        "e5a947146cd5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582: exit status 2 (333.901282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
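The --format flag takes a Go template over minikube's status struct, which is why {{.Host}} reports "Running" (the docker container is up) while the earlier {{.APIServer}} check reported "Stopped". A combined view can be requested in one call; field names here are assumed from the status output and should be treated as a sketch:

	out/minikube-linux-arm64 status -p ha-832582 --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'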
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-832582 cp ha-832582-m03:/home/docker/cp-test.txt ha-832582-m04:/home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp testdata/cp-test.txt ha-832582-m04:/home/docker/cp-test.txt                                                             │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m04.txt │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m04_ha-832582.txt                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582.txt                                                 │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m02 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node start m02 --alsologtostderr -v 5                                                                                      │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:55 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │                     │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:55 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5                                                                                   │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:57 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ node    │ ha-832582 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:58 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:58:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:58:02.918042  342768 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:02.918211  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918243  342768 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:02.918263  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918533  342768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:58:02.918914  342768 out.go:368] Setting JSON to false
	I1101 09:58:02.919786  342768 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6032,"bootTime":1761985051,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:58:02.919890  342768 start.go:143] virtualization:  
	I1101 09:58:02.923079  342768 out.go:179] * [ha-832582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:58:02.926767  342768 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:58:02.926822  342768 notify.go:221] Checking for updates...
	I1101 09:58:02.932590  342768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:58:02.935541  342768 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:02.938382  342768 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:58:02.941196  342768 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:58:02.944021  342768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:58:02.947258  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:02.947826  342768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:58:02.981516  342768 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:58:02.981632  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.054383  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.04442767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.054505  342768 docker.go:319] overlay module found
	I1101 09:58:03.057603  342768 out.go:179] * Using the docker driver based on existing profile
	I1101 09:58:03.060439  342768 start.go:309] selected driver: docker
	I1101 09:58:03.060472  342768 start.go:930] validating driver "docker" against &{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.060601  342768 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:58:03.060705  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.115910  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.107176811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.116329  342768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:58:03.116359  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:03.116411  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:03.116461  342768 start.go:353] cluster config:
	{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.119656  342768 out.go:179] * Starting "ha-832582" primary control-plane node in "ha-832582" cluster
	I1101 09:58:03.122400  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:03.125294  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:03.128178  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:03.128237  342768 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:58:03.128250  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:03.128253  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:03.128348  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:03.128359  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:03.128499  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.147945  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:03.147967  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:03.147995  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:03.148022  342768 start.go:360] acquireMachinesLock for ha-832582: {Name:mk797b578da0c53fbacfede5c9484035101b2ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:03.148089  342768 start.go:364] duration metric: took 45.35µs to acquireMachinesLock for "ha-832582"
	I1101 09:58:03.148111  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:03.148119  342768 fix.go:54] fixHost starting: 
	I1101 09:58:03.148373  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.165181  342768 fix.go:112] recreateIfNeeded on ha-832582: state=Stopped err=<nil>
	W1101 09:58:03.165215  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:03.168512  342768 out.go:252] * Restarting existing docker container for "ha-832582" ...
	I1101 09:58:03.168595  342768 cli_runner.go:164] Run: docker start ha-832582
	I1101 09:58:03.407252  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.433226  342768 kic.go:430] container "ha-832582" state is running.
	I1101 09:58:03.433643  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:03.456608  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.456845  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:03.456903  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:03.480040  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:03.480367  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:03.480376  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:03.480952  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33199: read: connection reset by peer
	I1101 09:58:06.633155  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.633179  342768 ubuntu.go:182] provisioning hostname "ha-832582"
	I1101 09:58:06.633238  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.651044  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.651360  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.651374  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582 && echo "ha-832582" | sudo tee /etc/hostname
	I1101 09:58:06.812426  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.812507  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.832800  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.833109  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.833135  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:06.978124  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:06.978162  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:06.978183  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:06.978200  342768 provision.go:84] configureAuth start
	I1101 09:58:06.978265  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:06.995491  342768 provision.go:143] copyHostCerts
	I1101 09:58:06.995536  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995574  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:06.995588  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995674  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:06.995773  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995796  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:06.995810  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995841  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:06.995930  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995952  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:06.995964  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995990  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:06.996061  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582 san=[127.0.0.1 192.168.49.2 ha-832582 localhost minikube]
	I1101 09:58:07.519067  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:07.519138  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:07.519200  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.536957  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:07.642333  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:07.642391  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:07.660960  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:07.661018  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:07.677785  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:07.677843  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 09:58:07.694547  342768 provision.go:87] duration metric: took 716.319917ms to configureAuth
	I1101 09:58:07.694583  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:07.694801  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:07.694909  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.712779  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:07.713093  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:07.713114  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:08.052242  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:08.052306  342768 machine.go:97] duration metric: took 4.595450733s to provisionDockerMachine
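	(The CRIO_MINIKUBE_OPTIONS line echoed back just above lands in /etc/sysconfig/crio.minikube on the node. A minimal sketch for reading it back from the host, assuming the container name shown in this log:
	    docker exec ha-832582 cat /etc/sysconfig/crio.minikube
	    # expected, per the tee output above:
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	)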
	I1101 09:58:08.052334  342768 start.go:293] postStartSetup for "ha-832582" (driver="docker")
	I1101 09:58:08.052361  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:08.052459  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:08.052536  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.073358  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.177812  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:08.181279  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:08.181304  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:08.181314  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:08.181367  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:08.181443  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:08.181461  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:08.181557  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:08.189009  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:08.205960  342768 start.go:296] duration metric: took 153.59516ms for postStartSetup
	I1101 09:58:08.206069  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:08.206130  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.222745  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.322878  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:08.327536  342768 fix.go:56] duration metric: took 5.179409798s for fixHost
	I1101 09:58:08.327559  342768 start.go:83] releasing machines lock for "ha-832582", held for 5.179459334s
	I1101 09:58:08.327648  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:08.343793  342768 ssh_runner.go:195] Run: cat /version.json
	I1101 09:58:08.343844  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.344088  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:08.344140  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.362917  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.364182  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.559877  342768 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:08.566123  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:08.601278  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:08.606120  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:08.606226  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:08.613618  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:08.613639  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:08.613670  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:08.613775  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:08.628944  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:08.641906  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:08.641985  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:08.657234  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:08.670311  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:08.776949  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:08.895687  342768 docker.go:234] disabling docker service ...
	I1101 09:58:08.895763  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:08.912227  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:08.924716  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:09.033164  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:09.152553  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:09.165610  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:09.180758  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:09.180842  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.190144  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:09.190223  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.199488  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.208470  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.217564  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:09.226234  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.235095  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.243429  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.252434  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:09.260020  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:09.267457  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.373363  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
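	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl set as logged. A sketch for confirming this from the host after the restart, assuming the container name from this log and grep being available in the node image:
	    docker exec ha-832582 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, based on the commands above:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	)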
	I1101 09:58:09.495940  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:58:09.496021  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:58:09.499937  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:58:09.500082  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:58:09.503791  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:58:09.533304  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:58:09.533395  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.560842  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.595644  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:58:09.598486  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:58:09.614798  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:58:09.618883  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.629569  342768 kubeadm.go:884] updating cluster {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:58:09.629840  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:09.629912  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.667936  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.667962  342768 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:58:09.668023  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.693223  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.693250  342768 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:58:09.693259  342768 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:58:09.693353  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
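	(The unit fragment above is written out a few lines further down as a systemd drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A quick way to view the merged unit on the node, a sketch assuming the container name from this log and that systemd answers inside the kicbase container as it does for the ssh_runner calls:
	    docker exec ha-832582 systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in shown above
	)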
	I1101 09:58:09.693438  342768 ssh_runner.go:195] Run: crio config
	I1101 09:58:09.751790  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:09.751814  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:09.751834  342768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:58:09.751876  342768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832582 NodeName:ha-832582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:58:09.752075  342768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-832582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
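	(If the generated kubeadm YAML above ever needs to be sanity-checked by hand, recent kubeadm releases can validate it directly. A sketch, assuming kubeadm sits alongside kubelet under the binaries directory listed above and that this version carries the "config validate" subcommand; the file path is the one the log copies the config to:
	    docker exec ha-832582 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)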
	
	I1101 09:58:09.752102  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:58:09.752152  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:58:09.764023  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
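	(The fallback above is expected when the ip_vs kernel module is not loaded: the IPVS-based control-plane load balancing needs it, and inside the kicbase container modules can only come from the host kernel. A hedged check that would let the lsmod probe succeed on a host where the module is available:
	    sudo modprobe ip_vs      # load on the host; containers share the host kernel
	    lsmod | grep ip_vs       # the probe minikube runs above would then find it
	)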
	I1101 09:58:09.764122  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
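	(Once this static pod is running on the leading control-plane node, the APIServerHAVIP from the cluster config should appear as a secondary address on the interface named in vip_interface. A sketch for confirming that from the host, assuming the container name from this log and iproute2 being present in the node image:
	    docker exec ha-832582 ip addr show eth0 | grep 192.168.49.254
	)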
	I1101 09:58:09.764180  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:58:09.772107  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:58:09.772242  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 09:58:09.779796  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 09:58:09.792458  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:58:09.805570  342768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 09:58:09.818435  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:58:09.831753  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:58:09.835442  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.845042  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.952431  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:58:09.969023  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.2
	I1101 09:58:09.969056  342768 certs.go:195] generating shared ca certs ...
	I1101 09:58:09.969072  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:09.969241  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:58:09.969294  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:58:09.969307  342768 certs.go:257] generating profile certs ...
	I1101 09:58:09.969413  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:58:09.969456  342768 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2
	I1101 09:58:09.969474  342768 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 09:58:10.972603  342768 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 ...
	I1101 09:58:10.972640  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2: {Name:mka954bd27ed170438bba591673547458d094ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972825  342768 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 ...
	I1101 09:58:10.972842  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2: {Name:mk1061e2154b96baf6cb0ecee80a8eda645c1f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972926  342768 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt
	I1101 09:58:10.973062  342768 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key
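	(The IP list requested for the apiserver certificate at 09:58:09 above can be read back from the signed cert. A sketch using openssl against the profile directory on the integration host:
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	    # should list the IPs passed above, including the HA VIP 192.168.49.254
	)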
	I1101 09:58:10.973204  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:58:10.973222  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:58:10.973238  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:58:10.973256  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:58:10.973273  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:58:10.973288  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:58:10.973300  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:58:10.973317  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:58:10.973327  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:58:10.973379  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:58:10.973412  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:58:10.973425  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:58:10.973451  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:58:10.973476  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:58:10.973504  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:58:10.973552  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:10.973584  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:10.973600  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:58:10.973611  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:58:10.977021  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:58:11.008672  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:58:11.039364  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:58:11.065401  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:58:11.091095  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:58:11.131902  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:58:11.164406  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:58:11.198225  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:58:11.249652  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:58:11.275181  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:58:11.313024  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:58:11.348627  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:58:11.371097  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:58:11.381650  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:58:11.392802  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397197  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397269  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.466322  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:58:11.480286  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:58:11.490726  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498361  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498428  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.561754  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:58:11.576548  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:58:11.591018  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595330  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595393  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.664138  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
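	(The .0 symlink names created above are not arbitrary: they are the OpenSSL subject hash of each certificate, which is exactly what the preceding "openssl x509 -hash -noout" invocations compute. Reproducing one of them on the node, assuming the container name from this log:
	    docker exec ha-832582 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
	)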
	I1101 09:58:11.673663  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:58:11.677777  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:58:11.749190  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:58:11.791873  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:58:11.837053  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:58:11.885168  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:58:11.930387  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
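	(The -checkend 86400 probes above simply ask whether each certificate will still be valid 24 hours from now; openssl exits 0 if the cert will not have expired by then and 1 otherwise. A minimal sketch on the node, using one of the paths checked above:
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" \
	      || echo "expires within 24h"
	)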
	I1101 09:58:11.974056  342768 kubeadm.go:401] StartCluster: {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:11.974182  342768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:11.974253  342768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:12.007321  342768 cri.go:89] found id: "63f97ad5786a65d9b80ca88d289828cdda4b430f39036c771011f4f9a81dca4f"
	I1101 09:58:12.007345  342768 cri.go:89] found id: "fefab62a504e911c9eccaa75d59925b8ef3f49ca7726398893bf175da792fbb1"
	I1101 09:58:12.007351  342768 cri.go:89] found id: "73f1aa406ac05ed7ecdeab51e324661bb9e43e2bfe78738957991c966790c739"
	I1101 09:58:12.007355  342768 cri.go:89] found id: "6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088"
	I1101 09:58:12.007358  342768 cri.go:89] found id: "e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54"
	I1101 09:58:12.007362  342768 cri.go:89] found id: ""
	I1101 09:58:12.007432  342768 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:58:12.020873  342768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:58:12.020952  342768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:58:12.030528  342768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:58:12.030550  342768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:58:12.030601  342768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:58:12.038481  342768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:12.038883  342768 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-832582" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.038992  342768 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "ha-832582" cluster setting kubeconfig missing "ha-832582" context setting]
	I1101 09:58:12.039323  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.039866  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:58:12.040348  342768 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:58:12.040368  342768 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:58:12.040374  342768 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:58:12.040379  342768 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:58:12.040387  342768 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:58:12.040718  342768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:58:12.040811  342768 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 09:58:12.049163  342768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1101 09:58:12.049190  342768 kubeadm.go:602] duration metric: took 18.632637ms to restartPrimaryControlPlane
	I1101 09:58:12.049201  342768 kubeadm.go:403] duration metric: took 75.155923ms to StartCluster
	I1101 09:58:12.049217  342768 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.049278  342768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.049947  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
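	(After the repair above, the ha-832582 cluster and context entries should resolve again in the jenkins kubeconfig. A quick check from the integration host, a sketch assuming kubectl is installed there:
	    kubectl --kubeconfig /home/jenkins/minikube-integration/21833-285274/kubeconfig \
	      --context ha-832582 get nodes -o wide
	)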
	I1101 09:58:12.050162  342768 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:58:12.050191  342768 start.go:242] waiting for startup goroutines ...
	I1101 09:58:12.050207  342768 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:58:12.050639  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.054885  342768 out.go:179] * Enabled addons: 
	I1101 09:58:12.057752  342768 addons.go:515] duration metric: took 7.532576ms for enable addons: enabled=[]
	I1101 09:58:12.057799  342768 start.go:247] waiting for cluster config update ...
	I1101 09:58:12.057809  342768 start.go:256] writing updated cluster config ...
	I1101 09:58:12.061028  342768 out.go:203] 
	I1101 09:58:12.064154  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.064273  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.067726  342768 out.go:179] * Starting "ha-832582-m02" control-plane node in "ha-832582" cluster
	I1101 09:58:12.070608  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:12.073579  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:12.076459  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:12.076487  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:12.076589  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:12.076605  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:12.076732  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.076948  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:12.105644  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:12.105664  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:12.105677  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:12.105715  342768 start.go:360] acquireMachinesLock for ha-832582-m02: {Name:mkf85ec55e1996c34472f8191eb83bcbd97a011b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:12.105766  342768 start.go:364] duration metric: took 35.365µs to acquireMachinesLock for "ha-832582-m02"
	I1101 09:58:12.105795  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:12.105801  342768 fix.go:54] fixHost starting: m02
	I1101 09:58:12.106065  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.131724  342768 fix.go:112] recreateIfNeeded on ha-832582-m02: state=Stopped err=<nil>
	W1101 09:58:12.131753  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:12.135018  342768 out.go:252] * Restarting existing docker container for "ha-832582-m02" ...
	I1101 09:58:12.135097  342768 cli_runner.go:164] Run: docker start ha-832582-m02
	I1101 09:58:12.536520  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.574712  342768 kic.go:430] container "ha-832582-m02" state is running.
	I1101 09:58:12.575112  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:12.618100  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.618407  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:12.618487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:12.650389  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:12.650705  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:12.650715  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:12.651605  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:58:15.933915  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:15.933941  342768 ubuntu.go:182] provisioning hostname "ha-832582-m02"
	I1101 09:58:15.934014  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:15.987460  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:15.987772  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:15.987789  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582-m02 && echo "ha-832582-m02" | sudo tee /etc/hostname
	I1101 09:58:16.314408  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:16.314487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.343626  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:16.343927  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:16.343944  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:16.593142  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:16.593167  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:16.593184  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:16.593195  342768 provision.go:84] configureAuth start
	I1101 09:58:16.593253  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:16.650326  342768 provision.go:143] copyHostCerts
	I1101 09:58:16.650367  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650399  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:16.650411  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650486  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:16.650567  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650589  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:16.650600  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650629  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:16.650674  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650695  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:16.650703  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650730  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:16.650781  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582-m02 san=[127.0.0.1 192.168.49.3 ha-832582-m02 localhost minikube]
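	(Editor's note: the log line above shows minikube issuing a server certificate whose SANs cover the node's IPs and hostnames. Below is a minimal Go sketch of that SAN-bearing certificate creation using only the standard library; it self-signs for brevity instead of signing with the minikube CA key named in the log, and the SAN values are copied from the log entry purely for illustration. This is an assumed sketch, not minikube's own code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Generate a fresh key for the server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-832582-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
			DNSNames:    []string{"ha-832582-m02", "localhost", "minikube"},
		}
		// Self-signed here; minikube instead signs with its CA (ca.pem/ca-key.pem).
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}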
	I1101 09:58:16.783662  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:16.783792  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:16.783869  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.825898  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.012062  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:17.012132  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:17.068319  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:17.068382  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:58:17.096494  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:17.096557  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:17.127552  342768 provision.go:87] duration metric: took 534.343053ms to configureAuth
	I1101 09:58:17.127579  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:17.127812  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:17.127918  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.173337  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:17.173640  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:17.173660  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:17.742511  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:17.742535  342768 machine.go:97] duration metric: took 5.124117974s to provisionDockerMachine
	I1101 09:58:17.742546  342768 start.go:293] postStartSetup for "ha-832582-m02" (driver="docker")
	I1101 09:58:17.742557  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:17.742620  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:17.742669  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.776626  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.903612  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:17.910004  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:17.910040  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:17.910051  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:17.910106  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:17.910182  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:17.910189  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:17.910287  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:17.921230  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:17.949919  342768 start.go:296] duration metric: took 207.358478ms for postStartSetup
	I1101 09:58:17.949998  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:17.950043  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.975141  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.101002  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:18.109231  342768 fix.go:56] duration metric: took 6.003422355s for fixHost
	I1101 09:58:18.109298  342768 start.go:83] releasing machines lock for "ha-832582-m02", held for 6.003516649s
	I1101 09:58:18.109404  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:18.137736  342768 out.go:179] * Found network options:
	I1101 09:58:18.140766  342768 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 09:58:18.143721  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 09:58:18.143760  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 09:58:18.143834  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:18.143887  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.144157  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:18.144209  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.176200  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.181012  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.454952  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:18.579173  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:18.579289  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:18.623083  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:18.623169  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:18.623227  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:18.623296  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:18.686246  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:18.715168  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:18.715306  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:18.776969  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:18.820029  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:19.203132  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:19.545263  342768 docker.go:234] disabling docker service ...
	I1101 09:58:19.545377  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:19.611975  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:19.661375  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:19.968591  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:20.322030  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:20.377246  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:20.428021  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:20.428136  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.448333  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:20.448440  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.494239  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.509954  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.531043  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:20.546562  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.575054  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.599209  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.627200  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:20.650938  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:20.674283  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:21.004512  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:59:51.327238  342768 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.322673918s)
	I1101 09:59:51.327311  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:59:51.327492  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:59:51.332862  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:59:51.332922  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:59:51.336719  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:59:51.365406  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:59:51.365490  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.395065  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.426575  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:59:51.429610  342768 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 09:59:51.432670  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:59:51.449128  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:59:51.452943  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:51.462372  342768 mustload.go:66] Loading cluster: ha-832582
	I1101 09:59:51.462608  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:51.462862  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:59:51.484169  342768 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:59:51.484451  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.3
	I1101 09:59:51.484466  342768 certs.go:195] generating shared ca certs ...
	I1101 09:59:51.484481  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:59:51.484596  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:59:51.484637  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:59:51.484647  342768 certs.go:257] generating profile certs ...
	I1101 09:59:51.484720  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:59:51.484783  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.cfdf3314
	I1101 09:59:51.484827  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:59:51.484840  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:59:51.484853  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:59:51.484872  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:59:51.484886  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:59:51.484897  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:59:51.484912  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:59:51.484928  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:59:51.484939  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:59:51.485004  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:59:51.485035  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:59:51.485049  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:59:51.485072  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:59:51.485099  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:59:51.485122  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:59:51.485167  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:59:51.485197  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:59:51.485216  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:51.485231  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:59:51.485289  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:59:51.505623  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:59:51.602013  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 09:59:51.606013  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 09:59:51.614285  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 09:59:51.617662  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 09:59:51.626190  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 09:59:51.629806  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 09:59:51.638050  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 09:59:51.641429  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 09:59:51.649504  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 09:59:51.653190  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 09:59:51.662675  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 09:59:51.666366  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 09:59:51.675666  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:59:51.694409  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:59:51.714284  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:59:51.733851  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:59:51.752947  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:59:51.773341  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:59:51.792083  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:59:51.810450  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:59:51.829646  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:59:51.849065  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:59:51.868827  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:59:51.891330  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 09:59:51.904911  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 09:59:51.918898  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 09:59:51.934197  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 09:59:51.948234  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 09:59:51.960997  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 09:59:51.975251  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 09:59:51.989442  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:59:51.996139  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:59:52.006856  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011576  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011690  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.052830  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:59:52.061006  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:59:52.069890  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074806  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074872  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.121631  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:59:52.130945  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:59:52.140732  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145152  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145254  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.189261  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:59:52.197284  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:59:52.201018  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:59:52.244640  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:59:52.291107  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:59:52.333098  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:59:52.374947  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:59:52.416040  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
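	(Editor's note: the `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate is still valid for at least 24 hours. The following is a small Go sketch, under stated assumptions, of the same check done with the standard library; the file path is taken from the log and can be swapped for any of the certs listed.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}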
	I1101 09:59:52.458067  342768 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 09:59:52.458177  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:59:52.458207  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:59:52.458257  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:59:52.471027  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:59:52.471117  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 09:59:52.471214  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:59:52.479864  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:59:52.479956  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 09:59:52.488040  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:59:52.502060  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:59:52.516164  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:59:52.531779  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:59:52.535746  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:52.545530  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.680054  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.695591  342768 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:59:52.696046  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:52.701457  342768 out.go:179] * Verifying Kubernetes components...
	I1101 09:59:52.704242  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.825960  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.841449  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 09:59:52.841519  342768 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 09:59:52.841815  342768 node_ready.go:35] waiting up to 6m0s for node "ha-832582-m02" to be "Ready" ...
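	(Editor's note: from here the log is a 6-minute retry loop polling the apiserver for the node's Ready condition, which ultimately times out and fails the test. As a hedged illustration, not minikube's actual implementation, the sketch below shows how such a poll can be written with client-go; the kubeconfig path is hypothetical.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connection refused" while the apiserver is restarting
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, giving up after 6 minutes as in the log above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			if ready, err := nodeReady(ctx, cs, "ha-832582-m02"); err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node to be Ready:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}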
	I1101 10:00:24.926942  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:00:24.927351  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1101 10:00:27.343326  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:29.843264  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:32.343360  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:34.843237  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:36.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:01:43.899271  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:01:43.899642  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55716->192.168.49.2:8443: read: connection reset by peer
	W1101 10:01:46.343035  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:48.842515  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:51.342428  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:53.843341  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:56.342335  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:58.343338  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:00.842815  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:02.843269  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:05.343114  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:07.343295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:09.343359  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:11.843295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:03:17.100795  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:03:17.101130  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37558->192.168.49.2:8443: read: connection reset by peer
	W1101 10:03:19.343251  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:21.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:24.343238  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:26.842444  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:28.843273  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:31.343229  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:33.842318  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:35.842369  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:37.843231  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:39.843286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:42.342431  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:44.842376  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:46.843230  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:49.343299  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:51.843196  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:54.342397  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:06.345951  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:04:16.346594  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	I1101 10:04:18.761391  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:04:18.761797  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55754->192.168.49.2:8443: read: connection reset by peer
	W1101 10:04:20.842430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:22.842572  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:24.843325  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:27.343297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:29.842340  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:32.342396  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:34.343290  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:36.843297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:39.342353  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:41.343002  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:43.842379  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:45.843287  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:48.343254  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:50.343337  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:52.842301  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:54.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:57.343277  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:59.843343  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:01.843430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:04.342377  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:06.343265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:08.843265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:11.342401  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:13.842472  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:15.843291  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:18.343216  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:20.343304  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:22.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:25.342703  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:27.343208  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:29.842303  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:31.843204  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:34.342391  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:36.343286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:38.842462  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:50.343480  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:05:52.842736  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": context deadline exceeded
	I1101 10:05:52.842774  342768 node_ready.go:38] duration metric: took 6m0.000936091s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:05:52.846340  342768 out.go:203] 
	W1101 10:05:52.849403  342768 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:05:52.849424  342768 out.go:285] * 
	W1101 10:05:52.851598  342768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:05:52.854797  342768 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.211892535Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6d81d35d-5e3a-4a0d-95c7-fd4ce3862a7b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.212989865Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.213090913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.218756833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.219359632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.239436241Z" level=info msg="Created container ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.240120305Z" level=info msg="Starting container: ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112" id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.24397708Z" level=info msg="Started container" PID=1243 containerID=ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112 description=kube-system/kube-controller-manager-ha-832582/kube-controller-manager id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f8bb27411a46d477c2d6c99cd3320cc05020176d2346c660a30b294ab654fd6
	Nov 01 10:05:37 ha-832582 conmon[1241]: conmon ebb69e2d4cc0850778e8 <ninfo>: container 1243 exited with status 1
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.311325101Z" level=info msg="Removing container: 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.320548328Z" level=info msg="Error loading conmon cgroup of container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: cgroup deleted" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.3238441Z" level=info msg="Removed container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.209911635Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3dba77e3-5193-4cb7-857b-77c03b8eec61 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.214760967Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d11bade9-75dd-4891-a3ac-8b6ec0818fea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217346599Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217457231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222294082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222766582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.241766495Z" level=info msg="Created container c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.242395494Z" level=info msg="Starting container: c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961" id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.245675357Z" level=info msg="Started container" PID=1257 containerID=c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961 description=kube-system/kube-apiserver-ha-832582/kube-apiserver id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=04c614211235f3aea840ff0ef3962ce76f51fc82f70daa74b0ed9c0b2a0f7f66
	Nov 01 10:06:00 ha-832582 conmon[1255]: conmon c883cef2aa1b7c987d02 <ninfo>: container 1257 exited with status 255
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.37364516Z" level=info msg="Removing container: 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.380903964Z" level=info msg="Error loading conmon cgroup of container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: cgroup deleted" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.383910222Z" level=info msg="Removed container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c883cef2aa1b7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   33 seconds ago      Exited              kube-apiserver            8                   04c614211235f       kube-apiserver-ha-832582            kube-system
	ebb69e2d4cc08       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   47 seconds ago      Exited              kube-controller-manager   9                   4f8bb27411a46       kube-controller-manager-ha-832582   kube-system
	e5bbf60599882       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      3                   51ff665c16f3c       etcd-ha-832582                      kube-system
	fefab62a504e9       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  2                   adcb5b1f5a762       kube-vip-ha-832582                  kube-system
	6fabe4bc435b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            2                   c588a4af8fecc       kube-scheduler-ha-832582            kube-system
	e24f1c760a238       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Exited              etcd                      2                   51ff665c16f3c       etcd-ha-832582                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:50] overlayfs: idmapped layers are currently not supported
	[ +32.089424] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:55] overlayfs: idmapped layers are currently not supported
	[  +4.195210] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:58] overlayfs: idmapped layers are currently not supported
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54] <==
	{"level":"info","ts":"2025-11-01T10:03:28.368864Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:03:28.368907Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:03:28.368997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370653Z","caller":"etcdserver/server.go:1272","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-11-01T10:03:28.370677Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370679Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370784Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370801Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370832Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.370842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370825Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.370915Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370990Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.371010Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370965Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371030Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371047Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371056Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.374519Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:03:28.374595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.374658Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:03:28.374686Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e5bbf60599882a44b7077046577e6c6d255753632f3ad97ed0e3d65eb2697937] <==
	{"level":"info","ts":"2025-11-01T10:06:10.258800Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:10.258848Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:10.258869Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-11-01T10:06:11.358767Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358838Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358857Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358903Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:11.358915Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:11.859618Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:12.360690Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:12.461650Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:12.461729Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:12.461752Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:12.461782Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:12.461793Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:12.861504Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:13.362135Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:13.559351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559404Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559456Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:13.600081Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3c3ae81873ee7e73","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T10:06:13.600151Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3c3ae81873ee7e73","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T10:06:13.863136Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 10:06:13 up  1:48,  0 user,  load average: 0.45, 0.90, 1.44
	Linux ha-832582 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961] <==
	I1101 10:05:40.306392       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1101 10:05:40.853033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1101 10:05:40.853065       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1101 10:05:40.853075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1101 10:05:40.853080       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1101 10:05:40.853085       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1101 10:05:40.853089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1101 10:05:40.853093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1101 10:05:40.853097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1101 10:05:40.853101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1101 10:05:40.853106       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1101 10:05:40.853110       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1101 10:05:40.853114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1101 10:05:40.870762       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:05:40.872294       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1101 10:05:40.872930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 10:05:40.879616       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:05:40.890179       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 10:05:40.890287       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 10:05:40.890929       1 instance.go:239] Using reconciler: lease
	W1101 10:05:40.892474       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.869430       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.872570       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W1101 10:06:00.892234       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1101 10:06:00.892232       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112] <==
	I1101 10:05:26.730710       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:05:27.221967       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 10:05:27.222053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:05:27.223635       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:05:27.223814       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:05:27.224036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 10:05:27.224086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:05:37.225354       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088] <==
	E1101 10:05:21.354493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:05:24.146896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:05:28.850014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:05:29.568563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:05:31.156997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:05:32.075760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:05:34.876970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:05:36.541398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:05:36.855814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:05:51.115287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:05:52.948469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:05:56.934437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:06:01.899981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50632->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:06:01.900101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50552->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:06:01.900186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50560->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:06:01.900279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50606->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:06:01.900365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50620->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:06:01.900449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50654->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:06:01.900469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50592->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:06:02.196959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:06:03.029583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:06:05.944882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:06:10.499860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:06:13.307117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:06:13.410235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kubelet <==
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.704371     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.805495     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:11 ha-832582 kubelet[802]: E1101 10:06:11.906412     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.007604     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.108154     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.209479     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.310031     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.411359     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.512818     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.613397     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.714455     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.815691     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:12 ha-832582 kubelet[802]: E1101 10:06:12.920499     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.023829     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.065814     802 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.125012     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.226486     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.327500     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.428317     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.529607     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.630228     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.731242     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.832752     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:13 ha-832582 kubelet[802]: E1101 10:06:13.933884     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.034837     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582: exit status 2 (337.403585ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-832582" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (2.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-832582 node add --control-plane --alsologtostderr -v 5: exit status 103 (412.217781ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-832582-m02 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-832582"

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:06:14.504803  346703 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:06:14.504985  346703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:06:14.505024  346703 out.go:374] Setting ErrFile to fd 2...
	I1101 10:06:14.505045  346703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:06:14.505324  346703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:06:14.505643  346703 mustload.go:66] Loading cluster: ha-832582
	I1101 10:06:14.506162  346703 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:06:14.506689  346703 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 10:06:14.529651  346703 host.go:66] Checking if "ha-832582" exists ...
	I1101 10:06:14.529986  346703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:06:14.585605  346703 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:06:14.575968487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:06:14.586084  346703 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 10:06:14.602780  346703 host.go:66] Checking if "ha-832582-m02" exists ...
	I1101 10:06:14.603083  346703 api_server.go:166] Checking apiserver status ...
	I1101 10:06:14.603151  346703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:06:14.603242  346703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 10:06:14.620327  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	W1101 10:06:14.728115  346703 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1101 10:06:14.728187  346703 out.go:285] ! The control-plane node ha-832582 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-832582 apiserver is not running (will try others): (state=Stopped)
	I1101 10:06:14.728199  346703 api_server.go:166] Checking apiserver status ...
	I1101 10:06:14.728255  346703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:06:14.728305  346703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 10:06:14.747434  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	W1101 10:06:14.857332  346703 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:06:14.860607  346703 out.go:179] * The control-plane node ha-832582-m02 apiserver is not running: (state=Stopped)
	I1101 10:06:14.863499  346703 out.go:179]   To start a cluster, run: "minikube start -p ha-832582"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-arm64 -p ha-832582 node add --control-plane --alsologtostderr -v 5" : exit status 103
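The node add command returns exit status 103 because neither control-plane apiserver answers the pid probe shown in the stderr above (pgrep over SSH on ha-832582, then on ha-832582-m02). A minimal sketch of that probe, assuming the node container is reachable with `docker exec`; minikube itself tunnels the same pgrep through its SSH runner, and the node names are copied from the log:

// apiserver_probe.go: approximate the apiserver liveness check from the log above.
// `docker exec` stands in for minikube's SSH runner; node names come from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverRunning reports whether a kube-apiserver process whose command line
// mentions "minikube" is found inside the given node container.
func apiserverRunning(node string) (bool, error) {
	cmd := exec.Command("docker", "exec", node,
		"sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	out, err := cmd.Output()
	if err != nil {
		// pgrep exits 1 when no process matches; treat that as "not running",
		// which is exactly the "state=Stopped" case logged above.
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return false, nil
		}
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	for _, node := range []string{"ha-832582", "ha-832582-m02"} {
		up, err := apiserverRunning(node)
		fmt.Printf("%s: apiserver running=%v err=%v\n", node, up, err)
	}
}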
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-832582
helpers_test.go:243: (dbg) docker inspect ha-832582:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	        "Created": "2025-11-01T09:49:47.884718242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:58:03.201179109Z",
	            "FinishedAt": "2025-11-01T09:58:02.458383811Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hosts",
	        "LogPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737-json.log",
	        "Name": "/ha-832582",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-832582:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-832582",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	                "LowerDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-832582",
	                "Source": "/var/lib/docker/volumes/ha-832582/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-832582",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-832582",
	                "name.minikube.sigs.k8s.io": "ha-832582",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b1796f5bdac88308ffdad68dbe5a300087e1fdf42808f9a7bc9bb25df2947d",
	            "SandboxKey": "/var/run/docker/netns/f4b1796f5bda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-832582": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:4b:56:fb:7f:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4026c1b00639b2f23fdcf44b1c92a70df02212d3eadc8f713efc2420dc128ba",
	                    "EndpointID": "c45295fb0e9034fd21aa5c91972c347a41330627b88898fcda246b2b7e824074",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-832582",
	                        "e5a947146cd5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
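The inspect output above is also where the harness gets the host-mapped SSH port (22/tcp -> 127.0.0.1:33199) before opening its SSH client. A short sketch of that lookup using the same Go template that appears earlier in the log; the container name is from the report, the helper itself is illustrative rather than minikube's actual code:

// host_port.go: read the host port mapped to a container's 22/tcp, as the
// post-mortem does when it dials the node over SSH (port 33199 above).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same Go template seen in the log, minus the extra shell quoting.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-832582")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// With the inspect output above this prints 33199; the SSH client then
	// dials 127.0.0.1:<port> with the machine's id_rsa key.
	fmt.Println("ssh port:", port)
}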
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582: exit status 2 (392.538361ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
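The status probe is allowed to exit non-zero: the harness keeps the captured stdout ("Running") and just records the exit status before moving on to log collection. A hedged sketch of that pattern, with the binary path and profile name taken from the log and the helper name made up:

// status_probe.go: run `minikube status` the way the post-mortem helper does,
// keeping stdout even when the command exits non-zero (exit status 2 above).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(profile string) (status string, exitCode int, err error) {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	status = strings.TrimSpace(string(out)) // "Running" in the output above
	if ee, ok := err.(*exec.ExitError); ok {
		// A non-zero exit (2 here) is noted as "may be ok" rather than fatal.
		return status, ee.ExitCode(), nil
	}
	return status, 0, err
}

func main() {
	s, code, err := hostStatus("ha-832582")
	fmt.Printf("host=%q exit=%d err=%v\n", s, code, err)
}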
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp testdata/cp-test.txt ha-832582-m04:/home/docker/cp-test.txt                                                             │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m04.txt │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m04_ha-832582.txt                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582.txt                                                 │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m02 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node start m02 --alsologtostderr -v 5                                                                                      │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:55 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │                     │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:55 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5                                                                                   │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:57 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ node    │ ha-832582 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:58 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:58 UTC │                     │
	│ node    │ ha-832582 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 10:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:58:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:58:02.918042  342768 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:02.918211  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918243  342768 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:02.918263  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918533  342768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:58:02.918914  342768 out.go:368] Setting JSON to false
	I1101 09:58:02.919786  342768 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6032,"bootTime":1761985051,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:58:02.919890  342768 start.go:143] virtualization:  
	I1101 09:58:02.923079  342768 out.go:179] * [ha-832582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:58:02.926767  342768 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:58:02.926822  342768 notify.go:221] Checking for updates...
	I1101 09:58:02.932590  342768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:58:02.935541  342768 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:02.938382  342768 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:58:02.941196  342768 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:58:02.944021  342768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:58:02.947258  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:02.947826  342768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:58:02.981516  342768 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:58:02.981632  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.054383  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.04442767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.054505  342768 docker.go:319] overlay module found
	I1101 09:58:03.057603  342768 out.go:179] * Using the docker driver based on existing profile
	I1101 09:58:03.060439  342768 start.go:309] selected driver: docker
	I1101 09:58:03.060472  342768 start.go:930] validating driver "docker" against &{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.060601  342768 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:58:03.060705  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.115910  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.107176811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.116329  342768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:58:03.116359  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:03.116411  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:03.116461  342768 start.go:353] cluster config:
	{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.119656  342768 out.go:179] * Starting "ha-832582" primary control-plane node in "ha-832582" cluster
	I1101 09:58:03.122400  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:03.125294  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:03.128178  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:03.128237  342768 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:58:03.128250  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:03.128253  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:03.128348  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:03.128359  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:03.128499  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.147945  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:03.147967  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:03.147995  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:03.148022  342768 start.go:360] acquireMachinesLock for ha-832582: {Name:mk797b578da0c53fbacfede5c9484035101b2ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:03.148089  342768 start.go:364] duration metric: took 45.35µs to acquireMachinesLock for "ha-832582"
	I1101 09:58:03.148111  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:03.148119  342768 fix.go:54] fixHost starting: 
	I1101 09:58:03.148373  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.165181  342768 fix.go:112] recreateIfNeeded on ha-832582: state=Stopped err=<nil>
	W1101 09:58:03.165215  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:03.168512  342768 out.go:252] * Restarting existing docker container for "ha-832582" ...
	I1101 09:58:03.168595  342768 cli_runner.go:164] Run: docker start ha-832582
	I1101 09:58:03.407252  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.433226  342768 kic.go:430] container "ha-832582" state is running.
	I1101 09:58:03.433643  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:03.456608  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.456845  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:03.456903  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:03.480040  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:03.480367  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:03.480376  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:03.480952  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33199: read: connection reset by peer
	I1101 09:58:06.633155  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.633179  342768 ubuntu.go:182] provisioning hostname "ha-832582"
	I1101 09:58:06.633238  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.651044  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.651360  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.651374  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582 && echo "ha-832582" | sudo tee /etc/hostname
	I1101 09:58:06.812426  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.812507  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.832800  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.833109  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.833135  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:06.978124  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:06.978162  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:06.978183  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:06.978200  342768 provision.go:84] configureAuth start
	I1101 09:58:06.978265  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:06.995491  342768 provision.go:143] copyHostCerts
	I1101 09:58:06.995536  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995574  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:06.995588  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995674  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:06.995773  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995796  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:06.995810  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995841  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:06.995930  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995952  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:06.995964  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995990  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:06.996061  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582 san=[127.0.0.1 192.168.49.2 ha-832582 localhost minikube]
	I1101 09:58:07.519067  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:07.519138  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:07.519200  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.536957  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:07.642333  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:07.642391  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:07.660960  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:07.661018  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:07.677785  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:07.677843  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 09:58:07.694547  342768 provision.go:87] duration metric: took 716.319917ms to configureAuth
	I1101 09:58:07.694583  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:07.694801  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:07.694909  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.712779  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:07.713093  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:07.713114  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:08.052242  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:08.052306  342768 machine.go:97] duration metric: took 4.595450733s to provisionDockerMachine
	I1101 09:58:08.052334  342768 start.go:293] postStartSetup for "ha-832582" (driver="docker")
	I1101 09:58:08.052361  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:08.052459  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:08.052536  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.073358  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.177812  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:08.181279  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:08.181304  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:08.181314  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:08.181367  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:08.181443  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:08.181461  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:08.181557  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:08.189009  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:08.205960  342768 start.go:296] duration metric: took 153.59516ms for postStartSetup
	I1101 09:58:08.206069  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:08.206130  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.222745  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.322878  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:08.327536  342768 fix.go:56] duration metric: took 5.179409798s for fixHost
	I1101 09:58:08.327559  342768 start.go:83] releasing machines lock for "ha-832582", held for 5.179459334s
	I1101 09:58:08.327648  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:08.343793  342768 ssh_runner.go:195] Run: cat /version.json
	I1101 09:58:08.343844  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.344088  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:08.344140  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.362917  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.364182  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.559877  342768 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:08.566123  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:08.601278  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:08.606120  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:08.606226  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:08.613618  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:08.613639  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:08.613670  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:08.613775  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:08.628944  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:08.641906  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:08.641985  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:08.657234  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:08.670311  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:08.776949  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:08.895687  342768 docker.go:234] disabling docker service ...
	I1101 09:58:08.895763  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:08.912227  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:08.924716  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:09.033164  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:09.152553  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:09.165610  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:09.180758  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:09.180842  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.190144  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:09.190223  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.199488  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.208470  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.217564  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:09.226234  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.235095  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.243429  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.252434  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:09.260020  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:09.267457  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.373363  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:58:09.495940  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:58:09.496021  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:58:09.499937  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:58:09.500082  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:58:09.503791  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:58:09.533304  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:58:09.533395  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.560842  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.595644  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:58:09.598486  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:58:09.614798  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:58:09.618883  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.629569  342768 kubeadm.go:884] updating cluster {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:58:09.629840  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:09.629912  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.667936  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.667962  342768 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:58:09.668023  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.693223  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.693250  342768 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:58:09.693259  342768 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:58:09.693353  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:58:09.693438  342768 ssh_runner.go:195] Run: crio config
	I1101 09:58:09.751790  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:09.751814  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:09.751834  342768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:58:09.751876  342768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832582 NodeName:ha-832582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:58:09.752075  342768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-832582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:58:09.752102  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:58:09.752152  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:58:09.764023  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:09.764122  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
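The kube-vip config above is generated only after the `lsmod | grep ip_vs` probe fails, so the manifest keeps the VIP on ARP-based failover instead of enabling IPVS load-balancing. A hedged sketch of that gate, with an illustrative ipvsAvailable helper that is not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ipvsAvailable reports whether any ip_vs kernel module appears in lsmod output,
// mirroring the `lsmod | grep ip_vs` probe in the log.
func ipvsAvailable() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true
		}
	}
	return false
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules present: control-plane load-balancing can be enabled")
	} else {
		// Same fallback the log reports: give up on load-balancing, keep the ARP VIP only.
		fmt.Fprintln(os.Stderr, "ip_vs modules not loaded: using ARP-based VIP failover only")
	}
}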
	I1101 09:58:09.764180  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:58:09.772107  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:58:09.772242  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 09:58:09.779796  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 09:58:09.792458  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:58:09.805570  342768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 09:58:09.818435  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:58:09.831753  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:58:09.835442  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.845042  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.952431  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:58:09.969023  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.2
	I1101 09:58:09.969056  342768 certs.go:195] generating shared ca certs ...
	I1101 09:58:09.969072  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:09.969241  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:58:09.969294  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:58:09.969307  342768 certs.go:257] generating profile certs ...
	I1101 09:58:09.969413  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:58:09.969456  342768 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2
	I1101 09:58:09.969474  342768 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 09:58:10.972603  342768 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 ...
	I1101 09:58:10.972640  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2: {Name:mka954bd27ed170438bba591673547458d094ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972825  342768 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 ...
	I1101 09:58:10.972842  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2: {Name:mk1061e2154b96baf6cb0ecee80a8eda645c1f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972926  342768 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt
	I1101 09:58:10.973062  342768 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key
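The apiserver certificate generated above carries IP SANs for the service IP, loopback, both control-plane node IPs, and the HA VIP. A minimal Go sketch of issuing a server certificate with those SANs via crypto/x509; it self-signs for brevity, whereas minikube signs with its cluster CA, so treat it as illustration only:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs taken from the log line above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	// Self-signed here; a real cluster cert would use the CA cert/key as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}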
	I1101 09:58:10.973204  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:58:10.973222  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:58:10.973238  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:58:10.973256  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:58:10.973273  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:58:10.973288  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:58:10.973300  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:58:10.973317  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:58:10.973327  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:58:10.973379  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:58:10.973412  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:58:10.973425  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:58:10.973451  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:58:10.973476  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:58:10.973504  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:58:10.973552  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:10.973584  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:10.973600  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:58:10.973611  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:58:10.977021  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:58:11.008672  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:58:11.039364  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:58:11.065401  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:58:11.091095  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:58:11.131902  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:58:11.164406  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:58:11.198225  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:58:11.249652  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:58:11.275181  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:58:11.313024  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:58:11.348627  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:58:11.371097  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:58:11.381650  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:58:11.392802  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397197  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397269  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.466322  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:58:11.480286  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:58:11.490726  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498361  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498428  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.561754  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:58:11.576548  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:58:11.591018  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595330  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595393  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.664138  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
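The trust-store steps above follow the OpenSSL convention: copy each PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at it. A hedged Go sketch that shells out to the same openssl command and creates the link; linkBySubjectHash is a made-up name for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the `openssl x509 -hash` + `ln -fs` pair from the log:
// OpenSSL-style trust stores look up certificates through <subject-hash>.0 symlinks.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}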
	I1101 09:58:11.673663  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:58:11.677777  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:58:11.749190  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:58:11.791873  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:58:11.837053  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:58:11.885168  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:58:11.930387  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
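Each control-plane certificate above is checked with `openssl x509 -checkend 86400`, i.e. "does this certificate expire within the next 24 hours". A Go equivalent of that check, offered as a sketch rather than minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}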
	I1101 09:58:11.974056  342768 kubeadm.go:401] StartCluster: {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:11.974182  342768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:11.974253  342768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:12.007321  342768 cri.go:89] found id: "63f97ad5786a65d9b80ca88d289828cdda4b430f39036c771011f4f9a81dca4f"
	I1101 09:58:12.007345  342768 cri.go:89] found id: "fefab62a504e911c9eccaa75d59925b8ef3f49ca7726398893bf175da792fbb1"
	I1101 09:58:12.007351  342768 cri.go:89] found id: "73f1aa406ac05ed7ecdeab51e324661bb9e43e2bfe78738957991c966790c739"
	I1101 09:58:12.007355  342768 cri.go:89] found id: "6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088"
	I1101 09:58:12.007358  342768 cri.go:89] found id: "e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54"
	I1101 09:58:12.007362  342768 cri.go:89] found id: ""
	I1101 09:58:12.007432  342768 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:58:12.020873  342768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:58:12.020952  342768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:58:12.030528  342768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:58:12.030550  342768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:58:12.030601  342768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:58:12.038481  342768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:12.038883  342768 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-832582" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.038992  342768 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "ha-832582" cluster setting kubeconfig missing "ha-832582" context setting]
	I1101 09:58:12.039323  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.039866  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:58:12.040348  342768 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:58:12.040368  342768 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:58:12.040374  342768 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:58:12.040379  342768 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:58:12.040387  342768 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:58:12.040718  342768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:58:12.040811  342768 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 09:58:12.049163  342768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1101 09:58:12.049190  342768 kubeadm.go:602] duration metric: took 18.632637ms to restartPrimaryControlPlane
	I1101 09:58:12.049201  342768 kubeadm.go:403] duration metric: took 75.155923ms to StartCluster
	I1101 09:58:12.049217  342768 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.049278  342768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.049947  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.050162  342768 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:58:12.050191  342768 start.go:242] waiting for startup goroutines ...
	I1101 09:58:12.050207  342768 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:58:12.050639  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.054885  342768 out.go:179] * Enabled addons: 
	I1101 09:58:12.057752  342768 addons.go:515] duration metric: took 7.532576ms for enable addons: enabled=[]
	I1101 09:58:12.057799  342768 start.go:247] waiting for cluster config update ...
	I1101 09:58:12.057809  342768 start.go:256] writing updated cluster config ...
	I1101 09:58:12.061028  342768 out.go:203] 
	I1101 09:58:12.064154  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.064273  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.067726  342768 out.go:179] * Starting "ha-832582-m02" control-plane node in "ha-832582" cluster
	I1101 09:58:12.070608  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:12.073579  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:12.076459  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:12.076487  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:12.076589  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:12.076605  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:12.076732  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.076948  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:12.105644  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:12.105664  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:12.105677  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:12.105715  342768 start.go:360] acquireMachinesLock for ha-832582-m02: {Name:mkf85ec55e1996c34472f8191eb83bcbd97a011b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:12.105766  342768 start.go:364] duration metric: took 35.365µs to acquireMachinesLock for "ha-832582-m02"
	I1101 09:58:12.105795  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:12.105801  342768 fix.go:54] fixHost starting: m02
	I1101 09:58:12.106065  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.131724  342768 fix.go:112] recreateIfNeeded on ha-832582-m02: state=Stopped err=<nil>
	W1101 09:58:12.131753  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:12.135018  342768 out.go:252] * Restarting existing docker container for "ha-832582-m02" ...
	I1101 09:58:12.135097  342768 cli_runner.go:164] Run: docker start ha-832582-m02
	I1101 09:58:12.536520  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.574712  342768 kic.go:430] container "ha-832582-m02" state is running.
	I1101 09:58:12.575112  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:12.618100  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.618407  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:12.618487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:12.650389  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:12.650705  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:12.650715  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:12.651605  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:58:15.933915  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
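Right after `docker start`, the first SSH handshake fails with EOF because sshd in the container is not listening yet; a later attempt a few seconds on succeeds. A rough Go sketch of that wait-and-retry pattern, assuming the forwarded port shown in the log and a hypothetical waitForSSH helper:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr until a TCP connection succeeds or the deadline passes,
// mirroring the retry behaviour visible in the log above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second) // back off briefly before retrying
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	// 127.0.0.1:33204 is the forwarded SSH port the log reports for ha-832582-m02.
	if err := waitForSSH("127.0.0.1:33204", time.Minute); err != nil {
		fmt.Println(err)
	}
}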
	I1101 09:58:15.933941  342768 ubuntu.go:182] provisioning hostname "ha-832582-m02"
	I1101 09:58:15.934014  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:15.987460  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:15.987772  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:15.987789  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582-m02 && echo "ha-832582-m02" | sudo tee /etc/hostname
	I1101 09:58:16.314408  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:16.314487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.343626  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:16.343927  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:16.343944  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:16.593142  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:16.593167  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:16.593184  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:16.593195  342768 provision.go:84] configureAuth start
	I1101 09:58:16.593253  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:16.650326  342768 provision.go:143] copyHostCerts
	I1101 09:58:16.650367  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650399  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:16.650411  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650486  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:16.650567  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650589  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:16.650600  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650629  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:16.650674  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650695  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:16.650703  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650730  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:16.650781  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582-m02 san=[127.0.0.1 192.168.49.3 ha-832582-m02 localhost minikube]
	I1101 09:58:16.783662  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:16.783792  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:16.783869  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.825898  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.012062  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:17.012132  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:17.068319  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:17.068382  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:58:17.096494  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:17.096557  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:17.127552  342768 provision.go:87] duration metric: took 534.343053ms to configureAuth
	I1101 09:58:17.127579  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:17.127812  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:17.127918  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.173337  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:17.173640  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:17.173660  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:17.742511  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:17.742535  342768 machine.go:97] duration metric: took 5.124117974s to provisionDockerMachine
	I1101 09:58:17.742546  342768 start.go:293] postStartSetup for "ha-832582-m02" (driver="docker")
	I1101 09:58:17.742557  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:17.742620  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:17.742669  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.776626  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.903612  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:17.910004  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:17.910040  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:17.910051  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:17.910106  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:17.910182  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:17.910189  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:17.910287  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:17.921230  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:17.949919  342768 start.go:296] duration metric: took 207.358478ms for postStartSetup
	I1101 09:58:17.949998  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:17.950043  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.975141  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.101002  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
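Post-start setup samples disk usage on /var with `df -h` (percent used) and `df -BG` (gigabytes available). A rough, Linux-only Go equivalent using syscall.Statfs; the helper name and the exact percentage arithmetic are illustrative assumptions, not minikube's code:

package main

import (
	"fmt"
	"syscall"
)

// varDiskUsage reports free gigabytes and an approximate percent-used figure for the
// filesystem holding path, roughly what the df probes in the log read.
func varDiskUsage(path string) (freeGB uint64, usedPct float64, err error) {
	var st syscall.Statfs_t
	if err = syscall.Statfs(path, &st); err != nil {
		return 0, 0, err
	}
	total := st.Blocks * uint64(st.Bsize)
	free := st.Bavail * uint64(st.Bsize)
	return free / (1 << 30), 100 * float64(total-free) / float64(total), nil
}

func main() {
	free, used, err := varDiskUsage("/var")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("/var: %dG free, %.0f%% used\n", free, used)
}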
	I1101 09:58:18.109231  342768 fix.go:56] duration metric: took 6.003422355s for fixHost
	I1101 09:58:18.109298  342768 start.go:83] releasing machines lock for "ha-832582-m02", held for 6.003516649s
	I1101 09:58:18.109404  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:18.137736  342768 out.go:179] * Found network options:
	I1101 09:58:18.140766  342768 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 09:58:18.143721  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 09:58:18.143760  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 09:58:18.143834  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:18.143887  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.144157  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:18.144209  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.176200  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.181012  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.454952  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:18.579173  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:18.579289  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:18.623083  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:18.623169  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:18.623227  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:18.623296  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:18.686246  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:18.715168  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:18.715306  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:18.776969  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:18.820029  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:19.203132  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:19.545263  342768 docker.go:234] disabling docker service ...
	I1101 09:58:19.545377  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:19.611975  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:19.661375  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:19.968591  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:20.322030  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:20.377246  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:20.428021  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:20.428136  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.448333  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:20.448440  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.494239  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.509954  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.531043  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:20.546562  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.575054  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.599209  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.627200  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:20.650938  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:20.674283  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:21.004512  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:59:51.327238  342768 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.322673918s)
	I1101 09:59:51.327311  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:59:51.327492  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:59:51.332862  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:59:51.332922  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:59:51.336719  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:59:51.365406  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:59:51.365490  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.395065  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.426575  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:59:51.429610  342768 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 09:59:51.432670  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:59:51.449128  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:59:51.452943  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:51.462372  342768 mustload.go:66] Loading cluster: ha-832582
	I1101 09:59:51.462608  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:51.462862  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:59:51.484169  342768 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:59:51.484451  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.3
	I1101 09:59:51.484466  342768 certs.go:195] generating shared ca certs ...
	I1101 09:59:51.484481  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:59:51.484596  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:59:51.484637  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:59:51.484647  342768 certs.go:257] generating profile certs ...
	I1101 09:59:51.484720  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:59:51.484783  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.cfdf3314
	I1101 09:59:51.484827  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:59:51.484840  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:59:51.484853  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:59:51.484872  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:59:51.484886  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:59:51.484897  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:59:51.484912  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:59:51.484928  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:59:51.484939  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:59:51.485004  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:59:51.485035  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:59:51.485049  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:59:51.485072  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:59:51.485099  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:59:51.485122  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:59:51.485167  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:59:51.485197  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:59:51.485216  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:51.485231  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:59:51.485289  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:59:51.505623  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:59:51.602013  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 09:59:51.606013  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 09:59:51.614285  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 09:59:51.617662  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 09:59:51.626190  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 09:59:51.629806  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 09:59:51.638050  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 09:59:51.641429  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 09:59:51.649504  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 09:59:51.653190  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 09:59:51.662675  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 09:59:51.666366  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 09:59:51.675666  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:59:51.694409  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:59:51.714284  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:59:51.733851  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:59:51.752947  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:59:51.773341  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:59:51.792083  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:59:51.810450  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:59:51.829646  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:59:51.849065  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:59:51.868827  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:59:51.891330  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 09:59:51.904911  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 09:59:51.918898  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 09:59:51.934197  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 09:59:51.948234  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 09:59:51.960997  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 09:59:51.975251  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 09:59:51.989442  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:59:51.996139  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:59:52.006856  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011576  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011690  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.052830  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:59:52.061006  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:59:52.069890  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074806  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074872  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.121631  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:59:52.130945  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:59:52.140732  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145152  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145254  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.189261  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
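The three blocks above install each PEM under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (openssl x509 -hash -noout), which is how OpenSSL locates trusted CAs. A minimal Go sketch of that hash-and-symlink step (illustrative; assumes the openssl binary is on PATH and write access to the certs directory):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of a PEM certificate and
    // creates the <hash>.0 symlink in certsDir, the same step as the logged commands.
    func linkByHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }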
	I1101 09:59:52.197284  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:59:52.201018  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:59:52.244640  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:59:52.291107  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:59:52.333098  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:59:52.374947  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:59:52.416040  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
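Each check above is openssl x509 -checkend 86400, i.e. fail if the certificate expires within the next 24 hours. A pure-stdlib Go sketch of the same check (illustrative, not the minikube implementation; the path in main is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend fails if the certificate at path expires within window, mirroring
    // "openssl x509 -noout -in <path> -checkend 86400" from the log above.
    func checkend(path string, window time.Duration) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if time.Now().Add(window).After(cert.NotAfter) {
            return fmt.Errorf("%s expires %s (within %s)", path, cert.NotAfter, window)
        }
        return nil
    }

    func main() {
        if err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }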
	I1101 09:59:52.458067  342768 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 09:59:52.458177  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:59:52.458207  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:59:52.458257  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:59:52.471027  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
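kube-vip's control-plane load balancing is skipped here because lsmod shows no ip_vs module. A minimal Go sketch of the same detection, reading /proc/modules directly (illustrative only, not the minikube source):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasIPVS reports whether any ip_vs* module is loaded, the condition the
    // "lsmod | grep ip_vs" check above probes for.
    func hasIPVS() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasIPVS()
        fmt.Println("ip_vs loaded:", ok, "err:", err)
    }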
	I1101 09:59:52.471117  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 09:59:52.471214  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:59:52.479864  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:59:52.479956  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 09:59:52.488040  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:59:52.502060  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:59:52.516164  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:59:52.531779  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:59:52.535746  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:52.545530  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.680054  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.695591  342768 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:59:52.696046  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:52.701457  342768 out.go:179] * Verifying Kubernetes components...
	I1101 09:59:52.704242  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.825960  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.841449  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 09:59:52.841519  342768 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 09:59:52.841815  342768 node_ready.go:35] waiting up to 6m0s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:00:24.926942  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:00:24.927351  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1101 10:00:27.343326  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:29.843264  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:32.343360  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:34.843237  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:36.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:01:43.899271  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:01:43.899642  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55716->192.168.49.2:8443: read: connection reset by peer
	W1101 10:01:46.343035  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:48.842515  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:51.342428  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:53.843341  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:56.342335  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:58.343338  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:00.842815  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:02.843269  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:05.343114  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:07.343295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:09.343359  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:11.843295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:03:17.100795  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:03:17.101130  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37558->192.168.49.2:8443: read: connection reset by peer
	W1101 10:03:19.343251  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:21.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:24.343238  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:26.842444  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:28.843273  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:31.343229  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:33.842318  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:35.842369  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:37.843231  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:39.843286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:42.342431  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:44.842376  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:46.843230  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:49.343299  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:51.843196  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:54.342397  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:06.345951  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:04:16.346594  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	I1101 10:04:18.761391  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:04:18.761797  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55754->192.168.49.2:8443: read: connection reset by peer
	W1101 10:04:20.842430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:22.842572  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:24.843325  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:27.343297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:29.842340  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:32.342396  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:34.343290  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:36.843297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:39.342353  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:41.343002  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:43.842379  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:45.843287  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:48.343254  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:50.343337  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:52.842301  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:54.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:57.343277  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:59.843343  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:01.843430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:04.342377  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:06.343265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:08.843265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:11.342401  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:13.842472  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:15.843291  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:18.343216  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:20.343304  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:22.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:25.342703  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:27.343208  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:29.842303  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:31.843204  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:34.342391  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:36.343286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:38.842462  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:50.343480  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:05:52.842736  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": context deadline exceeded
	I1101 10:05:52.842774  342768 node_ready.go:38] duration metric: took 6m0.000936091s for node "ha-832582-m02" to be "Ready" ...
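The six-minute wait above is a poll of the node object until its Ready condition turns True; every attempt fails because the apiserver at 192.168.49.2:8443 keeps refusing connections. A client-go sketch of that wait loop (illustrative, not the minikube source; the kubeconfig path in main is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True or the
    // context deadline expires, the same shape as the 6m wait that fails above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitNodeReady(ctx, cs, "ha-832582-m02"))
    }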
	I1101 10:05:52.846340  342768 out.go:203] 
	W1101 10:05:52.849403  342768 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:05:52.849424  342768 out.go:285] * 
	W1101 10:05:52.851598  342768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:05:52.854797  342768 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.211892535Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6d81d35d-5e3a-4a0d-95c7-fd4ce3862a7b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.212989865Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.213090913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.218756833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.219359632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.239436241Z" level=info msg="Created container ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.240120305Z" level=info msg="Starting container: ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112" id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.24397708Z" level=info msg="Started container" PID=1243 containerID=ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112 description=kube-system/kube-controller-manager-ha-832582/kube-controller-manager id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f8bb27411a46d477c2d6c99cd3320cc05020176d2346c660a30b294ab654fd6
	Nov 01 10:05:37 ha-832582 conmon[1241]: conmon ebb69e2d4cc0850778e8 <ninfo>: container 1243 exited with status 1
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.311325101Z" level=info msg="Removing container: 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.320548328Z" level=info msg="Error loading conmon cgroup of container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: cgroup deleted" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.3238441Z" level=info msg="Removed container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.209911635Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3dba77e3-5193-4cb7-857b-77c03b8eec61 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.214760967Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d11bade9-75dd-4891-a3ac-8b6ec0818fea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217346599Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217457231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222294082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222766582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.241766495Z" level=info msg="Created container c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.242395494Z" level=info msg="Starting container: c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961" id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.245675357Z" level=info msg="Started container" PID=1257 containerID=c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961 description=kube-system/kube-apiserver-ha-832582/kube-apiserver id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=04c614211235f3aea840ff0ef3962ce76f51fc82f70daa74b0ed9c0b2a0f7f66
	Nov 01 10:06:00 ha-832582 conmon[1255]: conmon c883cef2aa1b7c987d02 <ninfo>: container 1257 exited with status 255
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.37364516Z" level=info msg="Removing container: 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.380903964Z" level=info msg="Error loading conmon cgroup of container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: cgroup deleted" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.383910222Z" level=info msg="Removed container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c883cef2aa1b7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   35 seconds ago      Exited              kube-apiserver            8                   04c614211235f       kube-apiserver-ha-832582            kube-system
	ebb69e2d4cc08       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   49 seconds ago      Exited              kube-controller-manager   9                   4f8bb27411a46       kube-controller-manager-ha-832582   kube-system
	e5bbf60599882       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      3                   51ff665c16f3c       etcd-ha-832582                      kube-system
	fefab62a504e9       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  2                   adcb5b1f5a762       kube-vip-ha-832582                  kube-system
	6fabe4bc435b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            2                   c588a4af8fecc       kube-scheduler-ha-832582            kube-system
	e24f1c760a238       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Exited              etcd                      2                   51ff665c16f3c       etcd-ha-832582                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:50] overlayfs: idmapped layers are currently not supported
	[ +32.089424] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:55] overlayfs: idmapped layers are currently not supported
	[  +4.195210] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:58] overlayfs: idmapped layers are currently not supported
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54] <==
	{"level":"info","ts":"2025-11-01T10:03:28.368864Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:03:28.368907Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:03:28.368997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370653Z","caller":"etcdserver/server.go:1272","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-11-01T10:03:28.370677Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370679Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370784Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370801Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370832Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.370842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370825Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.370915Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370990Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.371010Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370965Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371030Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371047Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371056Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.374519Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:03:28.374595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.374658Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:03:28.374686Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e5bbf60599882a44b7077046577e6c6d255753632f3ad97ed0e3d65eb2697937] <==
	{"level":"info","ts":"2025-11-01T10:06:12.461793Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:12.861504Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:13.362135Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:13.559351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559404Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559456Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:13.559468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:13.600081Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3c3ae81873ee7e73","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T10:06:13.600151Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3c3ae81873ee7e73","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T10:06:13.863136Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:14.363829Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:14.659207Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:14.659262Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:14.659283Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:14.659321Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:14.659332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:14.864866Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:15.365666Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:15.758907Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.758969Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.758993Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.759026Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.759037Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:15.866744Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 10:06:16 up  1:48,  0 user,  load average: 0.45, 0.90, 1.44
	Linux ha-832582 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961] <==
	I1101 10:05:40.306392       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1101 10:05:40.853033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1101 10:05:40.853065       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1101 10:05:40.853075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1101 10:05:40.853080       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1101 10:05:40.853085       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1101 10:05:40.853089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1101 10:05:40.853093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1101 10:05:40.853097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1101 10:05:40.853101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1101 10:05:40.853106       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1101 10:05:40.853110       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1101 10:05:40.853114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1101 10:05:40.870762       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:05:40.872294       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1101 10:05:40.872930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 10:05:40.879616       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:05:40.890179       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 10:05:40.890287       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 10:05:40.890929       1 instance.go:239] Using reconciler: lease
	W1101 10:05:40.892474       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.869430       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.872570       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W1101 10:06:00.892234       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1101 10:06:00.892232       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112] <==
	I1101 10:05:26.730710       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:05:27.221967       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 10:05:27.222053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:05:27.223635       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:05:27.223814       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:05:27.224036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 10:05:27.224086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:05:37.225354       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088] <==
	E1101 10:05:21.354493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:05:24.146896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:05:28.850014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:05:29.568563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:05:31.156997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:05:32.075760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:05:34.876970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:05:36.541398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:05:36.855814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:05:51.115287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:05:52.948469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:05:56.934437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:06:01.899981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50632->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:06:01.900101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50552->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:06:01.900186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50560->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:06:01.900279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50606->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:06:01.900365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50620->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:06:01.900449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50654->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:06:01.900469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50592->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:06:02.196959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:06:03.029583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:06:05.944882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:06:10.499860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:06:13.307117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:06:13.410235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kubelet <==
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.133388     802 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8443/api/v1/namespaces/default/events/ha-832582.1873d98fa3bf3118\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-832582.1873d98fa3bf3118  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-832582,UID:ha-832582,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-832582 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-832582,},FirstTimestamp:2025-11-01 09:58:10.182762776 +0000 UTC m=+0.209266556,LastTimestamp:2025-11-01 09:58:10.288605721 +0000 UTC m=+0.315109501,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-832582,}"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.136155     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.237367     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.338623     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.439797     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.541021     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.641920     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.743107     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.844288     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:14 ha-832582 kubelet[802]: I1101 10:06:14.903317     802 kubelet_node_status.go:75] "Attempting to register node" node="ha-832582"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.903910     802 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-832582"
	Nov 01 10:06:14 ha-832582 kubelet[802]: E1101 10:06:14.944783     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.045574     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.147407     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.248922     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.349779     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.450564     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.551377     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.652605     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.753611     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.855578     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.902213     802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-832582?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.956325     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.057844     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.159368     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
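
The logs above end with the kube-apiserver exiting ("Error creating leases: error creating storage factory: context deadline exceeded") and with the surviving etcd member unable to win a pre-vote while its peer at 192.168.49.3:2380 refuses connections. A minimal sketch for inspecting those containers directly on the node, assuming the ha-832582 profile from this run and that crictl is available in the node image (it normally is); the etcd container ID prefix is taken from the log header above:

	# List control-plane containers, including exited ones, on the primary node.
	out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 sudo crictl ps -a
	# Dump the logs of the etcd container shown above by its ID prefix.
	out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 sudo crictl logs e5bbf6059988
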
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582: exit status 2 (335.619963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-832582" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (2.18s)
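
The status checks in this post-mortem pull single fields out of the cluster status with Go templates. A minimal sketch of the same checks, assuming the ha-832582 profile and binary used throughout this report; only fields the report itself queries (Host and APIServer) are shown, and the expected output reflects the state captured above:

	# Host is the container state; it reports Running here.
	out/minikube-linux-arm64 status --format='{{.Host}}' -p ha-832582 -n ha-832582
	# APIServer reports Stopped here, so the command exits with status 2 (which the helpers treat as "may be ok").
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p ha-832582 -n ha-832582
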

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:305: expected profile "ha-832582" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-832582\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-832582\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-832582\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",
\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false
,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SS
HAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-832582" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-832582\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-832582\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-832582\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"re
gistry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Static
IP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-832582
helpers_test.go:243: (dbg) docker inspect ha-832582:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	        "Created": "2025-11-01T09:49:47.884718242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:58:03.201179109Z",
	            "FinishedAt": "2025-11-01T09:58:02.458383811Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/hosts",
	        "LogPath": "/var/lib/docker/containers/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737-json.log",
	        "Name": "/ha-832582",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-832582:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-832582",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737",
	                "LowerDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3b199af258ef4de1c0b42fda6ff3a586cf0532a7a45c32f7487490a832affe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-832582",
	                "Source": "/var/lib/docker/volumes/ha-832582/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-832582",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-832582",
	                "name.minikube.sigs.k8s.io": "ha-832582",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b1796f5bdac88308ffdad68dbe5a300087e1fdf42808f9a7bc9bb25df2947d",
	            "SandboxKey": "/var/run/docker/netns/f4b1796f5bda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-832582": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:4b:56:fb:7f:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4026c1b00639b2f23fdcf44b1c92a70df02212d3eadc8f713efc2420dc128ba",
	                    "EndpointID": "c45295fb0e9034fd21aa5c91972c347a41330627b88898fcda246b2b7e824074",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-832582",
	                        "e5a947146cd5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
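
The full docker inspect dump above can be narrowed to the fields this post-mortem cares about with Docker's --format flag. A minimal sketch, assuming the same ha-832582 container; the template paths mirror the NetworkSettings block shown above:

	# Container IP on the ha-832582 network (192.168.49.2 in this run).
	docker inspect -f '{{ (index .NetworkSettings.Networks "ha-832582").IPAddress }}' ha-832582
	# Host port published for the apiserver's 8443/tcp (33202 in this run).
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-832582
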
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-832582 -n ha-832582: exit status 2 (332.823134ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp testdata/cp-test.txt ha-832582-m04:/home/docker/cp-test.txt                                                             │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m04.txt │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m04_ha-832582.txt                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582.txt                                                 │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m02 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ cp      │ ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt               │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ ssh     │ ha-832582 ssh -n ha-832582-m03 sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ ha-832582 node start m02 --alsologtostderr -v 5                                                                                      │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:55 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │                     │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:55 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5                                                                                   │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:57 UTC │
	│ node    │ ha-832582 node list --alsologtostderr -v 5                                                                                           │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ node    │ ha-832582 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ stop    │ ha-832582 stop --alsologtostderr -v 5                                                                                                │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:58 UTC │
	│ start   │ ha-832582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 09:58 UTC │                     │
	│ node    │ ha-832582 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-832582 │ jenkins │ v1.37.0 │ 01 Nov 25 10:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:58:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:58:02.918042  342768 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:02.918211  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918243  342768 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:02.918263  342768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.918533  342768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:58:02.918914  342768 out.go:368] Setting JSON to false
	I1101 09:58:02.919786  342768 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6032,"bootTime":1761985051,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:58:02.919890  342768 start.go:143] virtualization:  
	I1101 09:58:02.923079  342768 out.go:179] * [ha-832582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:58:02.926767  342768 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:58:02.926822  342768 notify.go:221] Checking for updates...
	I1101 09:58:02.932590  342768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:58:02.935541  342768 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:02.938382  342768 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:58:02.941196  342768 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:58:02.944021  342768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:58:02.947258  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:02.947826  342768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:58:02.981516  342768 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:58:02.981632  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.054383  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.04442767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.054505  342768 docker.go:319] overlay module found
	I1101 09:58:03.057603  342768 out.go:179] * Using the docker driver based on existing profile
	I1101 09:58:03.060439  342768 start.go:309] selected driver: docker
	I1101 09:58:03.060472  342768 start.go:930] validating driver "docker" against &{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.060601  342768 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:58:03.060705  342768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:58:03.115910  342768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:58:03.107176811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:58:03.116329  342768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:58:03.116359  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:03.116411  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:03.116461  342768 start.go:353] cluster config:
	{Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:03.119656  342768 out.go:179] * Starting "ha-832582" primary control-plane node in "ha-832582" cluster
	I1101 09:58:03.122400  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:03.125294  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:03.128178  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:03.128237  342768 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:58:03.128250  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:03.128253  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:03.128348  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:03.128359  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:03.128499  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.147945  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:03.147967  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:03.147995  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:03.148022  342768 start.go:360] acquireMachinesLock for ha-832582: {Name:mk797b578da0c53fbacfede5c9484035101b2ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:03.148089  342768 start.go:364] duration metric: took 45.35µs to acquireMachinesLock for "ha-832582"
	I1101 09:58:03.148111  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:03.148119  342768 fix.go:54] fixHost starting: 
	I1101 09:58:03.148373  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.165181  342768 fix.go:112] recreateIfNeeded on ha-832582: state=Stopped err=<nil>
	W1101 09:58:03.165215  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:03.168512  342768 out.go:252] * Restarting existing docker container for "ha-832582" ...
	I1101 09:58:03.168595  342768 cli_runner.go:164] Run: docker start ha-832582
	I1101 09:58:03.407252  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:03.433226  342768 kic.go:430] container "ha-832582" state is running.
	I1101 09:58:03.433643  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:03.456608  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:03.456845  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:03.456903  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:03.480040  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:03.480367  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:03.480376  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:03.480952  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33199: read: connection reset by peer
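
The connection reset above is transient: the ha-832582 container was restarted a moment earlier and its sshd is still coming up, and the same hostname command succeeds about three seconds later on the next line. If a session like this never recovered, one assumed first check would be to confirm the forwarded SSH port (33199 in this log) is still mapped:

    $ docker port ha-832582 22    # should show the host mapping to port 33199 seen above
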
	I1101 09:58:06.633155  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.633179  342768 ubuntu.go:182] provisioning hostname "ha-832582"
	I1101 09:58:06.633238  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.651044  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.651360  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.651374  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582 && echo "ha-832582" | sudo tee /etc/hostname
	I1101 09:58:06.812426  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582
	
	I1101 09:58:06.812507  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:06.832800  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:06.833109  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:06.833135  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:06.978124  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:06.978162  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:06.978183  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:06.978200  342768 provision.go:84] configureAuth start
	I1101 09:58:06.978265  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:06.995491  342768 provision.go:143] copyHostCerts
	I1101 09:58:06.995536  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995574  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:06.995588  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:06.995674  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:06.995773  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995796  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:06.995810  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:06.995841  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:06.995930  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995952  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:06.995964  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:06.995990  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:06.996061  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582 san=[127.0.0.1 192.168.49.2 ha-832582 localhost minikube]
	I1101 09:58:07.519067  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:07.519138  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:07.519200  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.536957  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:07.642333  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:07.642391  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:07.660960  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:07.661018  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:07.677785  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:07.677843  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 09:58:07.694547  342768 provision.go:87] duration metric: took 716.319917ms to configureAuth
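
configureAuth above regenerates the docker-machine server certificate with the listed SANs (127.0.0.1, 192.168.49.2, ha-832582, localhost, minikube) and copies it to /etc/docker on the node. A hedged way to confirm the SANs on the installed certificate, not part of this run:

    $ out/minikube-linux-arm64 -p ha-832582 ssh -- \
        sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
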
	I1101 09:58:07.694583  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:07.694801  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:07.694909  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:07.712779  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:07.713093  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1101 09:58:07.713114  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:08.052242  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:08.052306  342768 machine.go:97] duration metric: took 4.595450733s to provisionDockerMachine
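
Provisioning also wrote /etc/sysconfig/crio.minikube with the insecure-registry option shown above and restarted CRI-O. A quick check that the drop-in landed (assumed verification, not from this run):

    $ out/minikube-linux-arm64 -p ha-832582 ssh -- sudo cat /etc/sysconfig/crio.minikube
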
	I1101 09:58:08.052334  342768 start.go:293] postStartSetup for "ha-832582" (driver="docker")
	I1101 09:58:08.052361  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:08.052459  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:08.052536  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.073358  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.177812  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:08.181279  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:08.181304  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:08.181314  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:08.181367  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:08.181443  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:08.181461  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:08.181557  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:08.189009  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:08.205960  342768 start.go:296] duration metric: took 153.59516ms for postStartSetup
	I1101 09:58:08.206069  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:08.206130  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.222745  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.322878  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:08.327536  342768 fix.go:56] duration metric: took 5.179409798s for fixHost
	I1101 09:58:08.327559  342768 start.go:83] releasing machines lock for "ha-832582", held for 5.179459334s
	I1101 09:58:08.327648  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:58:08.343793  342768 ssh_runner.go:195] Run: cat /version.json
	I1101 09:58:08.343844  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.344088  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:08.344140  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:58:08.362917  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.364182  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:58:08.559877  342768 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:08.566123  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:08.601278  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:08.606120  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:08.606226  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:08.613618  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:08.613639  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:08.613670  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:08.613775  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:08.628944  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:08.641906  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:08.641985  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:08.657234  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:08.670311  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:08.776949  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:08.895687  342768 docker.go:234] disabling docker service ...
	I1101 09:58:08.895763  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:08.912227  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:08.924716  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:09.033164  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:09.152553  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:09.165610  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:09.180758  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:09.180842  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.190144  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:09.190223  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.199488  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.208470  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.217564  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:09.226234  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.235095  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.243429  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:09.252434  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:09.260020  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:09.267457  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.373363  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
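
The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls) before CRI-O is restarted. A sketch for inspecting the resulting drop-in on the node, not part of the original run:

    $ out/minikube-linux-arm64 -p ha-832582 ssh -- \
        sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
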
	I1101 09:58:09.495940  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:58:09.496021  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:58:09.499937  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:58:09.500082  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:58:09.503791  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:58:09.533304  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:58:09.533395  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:58:09.560842  342768 ssh_runner.go:195] Run: crio --version
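
crictl reaches CRI-O here without extra flags because /etc/crictl.yaml (written above) points runtime-endpoint at unix:///var/run/crio/crio.sock. Two further checks one could run the same way (assumed, not part of this run):

    $ out/minikube-linux-arm64 -p ha-832582 ssh -- sudo crictl info
    $ out/minikube-linux-arm64 -p ha-832582 ssh -- sudo crictl ps -a
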
	I1101 09:58:09.595644  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:58:09.598486  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:58:09.614798  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:58:09.618883  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.629569  342768 kubeadm.go:884] updating cluster {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:58:09.629840  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:09.629912  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.667936  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.667962  342768 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:58:09.668023  342768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:58:09.693223  342768 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:58:09.693250  342768 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:58:09.693259  342768 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:58:09.693353  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
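
This kubelet drop-in is written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes). To see what the running kubelet actually picked up, something like the following would do (assumed inspection commands):

    $ out/minikube-linux-arm64 -p ha-832582 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    $ out/minikube-linux-arm64 -p ha-832582 ssh -- systemctl cat kubelet
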
	I1101 09:58:09.693438  342768 ssh_runner.go:195] Run: crio config
	I1101 09:58:09.751790  342768 cni.go:84] Creating CNI manager for ""
	I1101 09:58:09.751814  342768 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1101 09:58:09.751834  342768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:58:09.751876  342768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-832582 NodeName:ha-832582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:58:09.752075  342768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-832582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
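
The rendered kubeadm config above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). If it ever needs to be inspected or sanity-checked by hand, a sketch, assuming the bundled kubeadm supports `config validate`:

    $ out/minikube-linux-arm64 -p ha-832582 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    $ out/minikube-linux-arm64 -p ha-832582 ssh -- \
        sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
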
	
	I1101 09:58:09.752102  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:58:09.752152  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:58:09.764023  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
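
Because the ip_vs kernel module is not loaded, kube-vip is configured without IPVS-based control-plane load-balancing and only advertises the VIP via ARP (vip_arp is "true" in the manifest below). If IPVS were wanted, the module would have to be present on the host kernel, which the node containers share; an assumption about the environment, not something this run attempts:

    $ sudo modprobe ip_vs
    $ lsmod | grep ip_vs
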
	I1101 09:58:09.764122  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
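
This static pod manifest is written below to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip on the control-plane node and the API VIP 192.168.49.254 follows whichever instance holds the plndr-cp-lock lease. Once the cluster is up, a rough check, assuming the kubeconfig context is named after the profile:

    $ kubectl --context ha-832582 -n kube-system get pods -o wide | grep kube-vip
    $ kubectl --context ha-832582 -n kube-system get lease plndr-cp-lock
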
	I1101 09:58:09.764180  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:58:09.772107  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:58:09.772242  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 09:58:09.779796  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 09:58:09.792458  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:58:09.805570  342768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 09:58:09.818435  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:58:09.831753  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:58:09.835442  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:58:09.845042  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:09.952431  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:58:09.969023  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.2
	I1101 09:58:09.969056  342768 certs.go:195] generating shared ca certs ...
	I1101 09:58:09.969072  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:09.969241  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:58:09.969294  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:58:09.969307  342768 certs.go:257] generating profile certs ...
	I1101 09:58:09.969413  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:58:09.969456  342768 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2
	I1101 09:58:09.969474  342768 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 09:58:10.972603  342768 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 ...
	I1101 09:58:10.972640  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2: {Name:mka954bd27ed170438bba591673547458d094ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972825  342768 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 ...
	I1101 09:58:10.972842  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2: {Name:mk1061e2154b96baf6cb0ecee80a8eda645c1f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:10.972926  342768 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt
	I1101 09:58:10.973062  342768 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.fb6819d2 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key
	I1101 09:58:10.973204  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:58:10.973222  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:58:10.973238  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:58:10.973256  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:58:10.973273  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:58:10.973288  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:58:10.973300  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:58:10.973317  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:58:10.973327  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:58:10.973379  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:58:10.973412  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:58:10.973425  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:58:10.973451  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:58:10.973476  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:58:10.973504  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:58:10.973552  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:10.973584  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:10.973600  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:58:10.973611  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:58:10.977021  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:58:11.008672  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:58:11.039364  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:58:11.065401  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:58:11.091095  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:58:11.131902  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:58:11.164406  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:58:11.198225  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:58:11.249652  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:58:11.275181  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:58:11.313024  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:58:11.348627  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:58:11.371097  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:58:11.381650  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:58:11.392802  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397197  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.397269  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:58:11.466322  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:58:11.480286  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:58:11.490726  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498361  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.498428  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:58:11.561754  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:58:11.576548  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:58:11.591018  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595330  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.595393  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:58:11.664138  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
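	The three hash/symlink rounds above follow OpenSSL's subject-hash lookup convention: "openssl x509 -hash -noout" prints the subject hash of the PEM (b5213941 for minikubeCA.pem here), and /etc/ssl/certs/<hash>.0 must point at that PEM so TLS clients using the system certificate directory can resolve the CA. A minimal Go sketch of the same step, for illustration only (not minikube's implementation; assumes openssl on PATH and write access to /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash mirrors the logged "openssl x509 -hash -noout" + "ln -fs" pair:
	// compute the CA's subject hash, then point <certsDir>/<hash>.0 at the PEM.
	func linkCAByHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}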
	I1101 09:58:11.673663  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:58:11.677777  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:58:11.749190  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:58:11.791873  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:58:11.837053  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:58:11.885168  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:58:11.930387  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
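	Each "openssl x509 -noout ... -checkend 86400" run above asks a single question: will this certificate still be valid 86400 seconds (24 hours) from now? A rough Go equivalent using crypto/x509, shown only as a sketch (the path is one of the certs checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path reaches NotAfter
	// within d, i.e. what "openssl x509 -noout -checkend <seconds>" flags.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}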
	I1101 09:58:11.974056  342768 kubeadm.go:401] StartCluster: {Name:ha-832582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:58:11.974182  342768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:11.974253  342768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:12.007321  342768 cri.go:89] found id: "63f97ad5786a65d9b80ca88d289828cdda4b430f39036c771011f4f9a81dca4f"
	I1101 09:58:12.007345  342768 cri.go:89] found id: "fefab62a504e911c9eccaa75d59925b8ef3f49ca7726398893bf175da792fbb1"
	I1101 09:58:12.007351  342768 cri.go:89] found id: "73f1aa406ac05ed7ecdeab51e324661bb9e43e2bfe78738957991c966790c739"
	I1101 09:58:12.007355  342768 cri.go:89] found id: "6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088"
	I1101 09:58:12.007358  342768 cri.go:89] found id: "e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54"
	I1101 09:58:12.007362  342768 cri.go:89] found id: ""
	I1101 09:58:12.007432  342768 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:58:12.020873  342768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:58:12.020952  342768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:58:12.030528  342768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:58:12.030550  342768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:58:12.030601  342768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:58:12.038481  342768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:58:12.038883  342768 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-832582" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.038992  342768 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "ha-832582" cluster setting kubeconfig missing "ha-832582" context setting]
	I1101 09:58:12.039323  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.039866  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
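	The rest.Config dumped above is the profile's client configuration: the API server endpoint plus the client cert/key and CA generated for ha-832582. A stripped-down Go sketch of building an equivalent client with client-go (illustrative only, not the minikube code path; assumes client-go module versions compatible with Kubernetes v1.34):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Same endpoint and credential files as the logged rest.Config.
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes in ha-832582:", len(nodes.Items))
	}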
	I1101 09:58:12.040348  342768 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:58:12.040368  342768 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:58:12.040374  342768 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:58:12.040379  342768 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:58:12.040387  342768 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:58:12.040718  342768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:58:12.040811  342768 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 09:58:12.049163  342768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1101 09:58:12.049190  342768 kubeadm.go:602] duration metric: took 18.632637ms to restartPrimaryControlPlane
	I1101 09:58:12.049201  342768 kubeadm.go:403] duration metric: took 75.155923ms to StartCluster
	I1101 09:58:12.049217  342768 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.049278  342768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:58:12.049947  342768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:58:12.050162  342768 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:58:12.050191  342768 start.go:242] waiting for startup goroutines ...
	I1101 09:58:12.050207  342768 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:58:12.050639  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.054885  342768 out.go:179] * Enabled addons: 
	I1101 09:58:12.057752  342768 addons.go:515] duration metric: took 7.532576ms for enable addons: enabled=[]
	I1101 09:58:12.057799  342768 start.go:247] waiting for cluster config update ...
	I1101 09:58:12.057809  342768 start.go:256] writing updated cluster config ...
	I1101 09:58:12.061028  342768 out.go:203] 
	I1101 09:58:12.064154  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:12.064273  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.067726  342768 out.go:179] * Starting "ha-832582-m02" control-plane node in "ha-832582" cluster
	I1101 09:58:12.070608  342768 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:58:12.073579  342768 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:58:12.076459  342768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:58:12.076487  342768 cache.go:59] Caching tarball of preloaded images
	I1101 09:58:12.076589  342768 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:58:12.076605  342768 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:58:12.076732  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.076948  342768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:58:12.105644  342768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:58:12.105664  342768 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:58:12.105677  342768 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:58:12.105715  342768 start.go:360] acquireMachinesLock for ha-832582-m02: {Name:mkf85ec55e1996c34472f8191eb83bcbd97a011b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:58:12.105766  342768 start.go:364] duration metric: took 35.365µs to acquireMachinesLock for "ha-832582-m02"
	I1101 09:58:12.105795  342768 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:58:12.105801  342768 fix.go:54] fixHost starting: m02
	I1101 09:58:12.106065  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.131724  342768 fix.go:112] recreateIfNeeded on ha-832582-m02: state=Stopped err=<nil>
	W1101 09:58:12.131753  342768 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:58:12.135018  342768 out.go:252] * Restarting existing docker container for "ha-832582-m02" ...
	I1101 09:58:12.135097  342768 cli_runner.go:164] Run: docker start ha-832582-m02
	I1101 09:58:12.536520  342768 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:12.574712  342768 kic.go:430] container "ha-832582-m02" state is running.
	I1101 09:58:12.575112  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:12.618100  342768 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/config.json ...
	I1101 09:58:12.618407  342768 machine.go:94] provisionDockerMachine start ...
	I1101 09:58:12.618487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:12.650389  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:12.650705  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:12.650715  342768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:58:12.651605  342768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:58:15.933915  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:15.933941  342768 ubuntu.go:182] provisioning hostname "ha-832582-m02"
	I1101 09:58:15.934014  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:15.987460  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:15.987772  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:15.987789  342768 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-832582-m02 && echo "ha-832582-m02" | sudo tee /etc/hostname
	I1101 09:58:16.314408  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-832582-m02
	
	I1101 09:58:16.314487  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.343626  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:16.343927  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:16.343944  342768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-832582-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-832582-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-832582-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:58:16.593142  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:58:16.593167  342768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 09:58:16.593184  342768 ubuntu.go:190] setting up certificates
	I1101 09:58:16.593195  342768 provision.go:84] configureAuth start
	I1101 09:58:16.593253  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:16.650326  342768 provision.go:143] copyHostCerts
	I1101 09:58:16.650367  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650399  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 09:58:16.650411  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 09:58:16.650486  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 09:58:16.650567  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650589  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 09:58:16.650600  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 09:58:16.650629  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 09:58:16.650674  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650695  342768 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 09:58:16.650703  342768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 09:58:16.650730  342768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 09:58:16.650781  342768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.ha-832582-m02 san=[127.0.0.1 192.168.49.3 ha-832582-m02 localhost minikube]
	I1101 09:58:16.783662  342768 provision.go:177] copyRemoteCerts
	I1101 09:58:16.783792  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:58:16.783869  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:16.825898  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.012062  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:58:17.012132  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:58:17.068319  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:58:17.068382  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:58:17.096494  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:58:17.096557  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:58:17.127552  342768 provision.go:87] duration metric: took 534.343053ms to configureAuth
	I1101 09:58:17.127579  342768 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:58:17.127812  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:17.127918  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.173337  342768 main.go:143] libmachine: Using SSH client type: native
	I1101 09:58:17.173640  342768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1101 09:58:17.173660  342768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:58:17.742511  342768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:58:17.742535  342768 machine.go:97] duration metric: took 5.124117974s to provisionDockerMachine
	I1101 09:58:17.742546  342768 start.go:293] postStartSetup for "ha-832582-m02" (driver="docker")
	I1101 09:58:17.742557  342768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:58:17.742620  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:58:17.742669  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.776626  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:17.903612  342768 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:58:17.910004  342768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:58:17.910040  342768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:58:17.910051  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 09:58:17.910106  342768 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 09:58:17.910182  342768 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 09:58:17.910189  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 09:58:17.910287  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:58:17.921230  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:58:17.949919  342768 start.go:296] duration metric: took 207.358478ms for postStartSetup
	I1101 09:58:17.949998  342768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:58:17.950043  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:17.975141  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.101002  342768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:58:18.109231  342768 fix.go:56] duration metric: took 6.003422355s for fixHost
	I1101 09:58:18.109298  342768 start.go:83] releasing machines lock for "ha-832582-m02", held for 6.003516649s
	I1101 09:58:18.109404  342768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m02
	I1101 09:58:18.137736  342768 out.go:179] * Found network options:
	I1101 09:58:18.140766  342768 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 09:58:18.143721  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 09:58:18.143760  342768 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 09:58:18.143834  342768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:58:18.143887  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.144157  342768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:58:18.144209  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m02
	I1101 09:58:18.176200  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.181012  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m02/id_rsa Username:docker}
	I1101 09:58:18.454952  342768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:58:18.579173  342768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:58:18.579289  342768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:58:18.623083  342768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:58:18.623169  342768 start.go:496] detecting cgroup driver to use...
	I1101 09:58:18.623227  342768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:58:18.623296  342768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:58:18.686246  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:58:18.715168  342768 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:58:18.715306  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:58:18.776969  342768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:58:18.820029  342768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:58:19.203132  342768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:58:19.545263  342768 docker.go:234] disabling docker service ...
	I1101 09:58:19.545377  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:58:19.611975  342768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:58:19.661375  342768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:58:19.968591  342768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:58:20.322030  342768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:58:20.377246  342768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:58:20.428021  342768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:58:20.428136  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.448333  342768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:58:20.448440  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.494239  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.509954  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.531043  342768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:58:20.546562  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.575054  342768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:58:20.599209  342768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
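	Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl the test expects. An illustrative fragment of the resulting file (section names follow CRI-O's standard config layout; the test image's actual file may carry additional settings):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]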
	I1101 09:58:20.627200  342768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:58:20.650938  342768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:58:20.674283  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:58:21.004512  342768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:59:51.327238  342768 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.322673918s)
	I1101 09:59:51.327311  342768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:59:51.327492  342768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:59:51.332862  342768 start.go:564] Will wait 60s for crictl version
	I1101 09:59:51.332922  342768 ssh_runner.go:195] Run: which crictl
	I1101 09:59:51.336719  342768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:59:51.365406  342768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:59:51.365490  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.395065  342768 ssh_runner.go:195] Run: crio --version
	I1101 09:59:51.426575  342768 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:59:51.429610  342768 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 09:59:51.432670  342768 cli_runner.go:164] Run: docker network inspect ha-832582 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:59:51.449128  342768 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:59:51.452943  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:51.462372  342768 mustload.go:66] Loading cluster: ha-832582
	I1101 09:59:51.462608  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:51.462862  342768 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:59:51.484169  342768 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:59:51.484451  342768 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582 for IP: 192.168.49.3
	I1101 09:59:51.484466  342768 certs.go:195] generating shared ca certs ...
	I1101 09:59:51.484481  342768 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:59:51.484596  342768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 09:59:51.484637  342768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 09:59:51.484647  342768 certs.go:257] generating profile certs ...
	I1101 09:59:51.484720  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key
	I1101 09:59:51.484783  342768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key.cfdf3314
	I1101 09:59:51.484827  342768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key
	I1101 09:59:51.484840  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:59:51.484853  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:59:51.484872  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:59:51.484886  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:59:51.484897  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:59:51.484912  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:59:51.484928  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:59:51.484939  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:59:51.485004  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 09:59:51.485035  342768 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 09:59:51.485049  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:59:51.485072  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:59:51.485099  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:59:51.485122  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 09:59:51.485167  342768 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 09:59:51.485197  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 09:59:51.485216  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:51.485231  342768 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 09:59:51.485289  342768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:59:51.505623  342768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:59:51.602013  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 09:59:51.606013  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 09:59:51.614285  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 09:59:51.617662  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 09:59:51.626190  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 09:59:51.629806  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 09:59:51.638050  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 09:59:51.641429  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 09:59:51.649504  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 09:59:51.653190  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 09:59:51.662675  342768 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 09:59:51.666366  342768 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 09:59:51.675666  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:59:51.694409  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:59:51.714284  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:59:51.733851  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:59:51.752947  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:59:51.773341  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:59:51.792083  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:59:51.810450  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:59:51.829646  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 09:59:51.849065  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:59:51.868827  342768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 09:59:51.891330  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 09:59:51.904911  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 09:59:51.918898  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 09:59:51.934197  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 09:59:51.948234  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 09:59:51.960997  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 09:59:51.975251  342768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 09:59:51.989442  342768 ssh_runner.go:195] Run: openssl version
	I1101 09:59:51.996139  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 09:59:52.006856  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011576  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.011690  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 09:59:52.052830  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:59:52.061006  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:59:52.069890  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074806  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.074872  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:59:52.121631  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:59:52.130945  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 09:59:52.140732  342768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145152  342768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.145254  342768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 09:59:52.189261  342768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 09:59:52.197284  342768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:59:52.201018  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:59:52.244640  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:59:52.291107  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:59:52.333098  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:59:52.374947  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:59:52.416040  342768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:59:52.458067  342768 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 09:59:52.458177  342768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-832582-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-832582 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:59:52.458207  342768 kube-vip.go:115] generating kube-vip config ...
	I1101 09:59:52.458257  342768 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 09:59:52.471027  342768 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:59:52.471117  342768 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 09:59:52.471214  342768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:59:52.479864  342768 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:59:52.479956  342768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 09:59:52.488040  342768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:59:52.502060  342768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:59:52.516164  342768 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 09:59:52.531779  342768 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 09:59:52.535746  342768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:59:52.545530  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.680054  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.695591  342768 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:59:52.696046  342768 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:59:52.701457  342768 out.go:179] * Verifying Kubernetes components...
	I1101 09:59:52.704242  342768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:59:52.825960  342768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:59:52.841449  342768 kapi.go:59] client config for ha-832582: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/ha-832582/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 09:59:52.841519  342768 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 09:59:52.841815  342768 node_ready.go:35] waiting up to 6m0s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:00:24.926942  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:00:24.927351  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1101 10:00:27.343326  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:29.843264  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:32.343360  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:34.843237  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:00:36.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:01:43.899271  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:01:43.899642  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55716->192.168.49.2:8443: read: connection reset by peer
	W1101 10:01:46.343035  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:48.842515  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:51.342428  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:53.843341  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:56.342335  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:01:58.343338  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:00.842815  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:02.843269  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:05.343114  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:07.343295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:09.343359  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:02:11.843295  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1101 10:03:17.100795  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:03:17.101130  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37558->192.168.49.2:8443: read: connection reset by peer
	W1101 10:03:19.343251  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:21.843314  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:24.343238  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:26.842444  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:28.843273  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:31.343229  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:33.842318  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:35.842369  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:37.843231  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:39.843286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:42.342431  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:44.842376  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:46.843230  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:49.343299  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:51.843196  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:03:54.342397  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:06.345951  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:04:16.346594  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	I1101 10:04:18.761391  342768 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02"
	W1101 10:04:18.761797  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:55754->192.168.49.2:8443: read: connection reset by peer
	W1101 10:04:20.842430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:22.842572  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:24.843325  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:27.343297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:29.842340  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:32.342396  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:34.343290  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:36.843297  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:39.342353  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:41.343002  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:43.842379  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:45.843287  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:48.343254  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:50.343337  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:52.842301  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:54.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:57.343277  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:04:59.843343  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:01.843430  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:04.342377  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:06.343265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:08.843265  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:11.342401  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:13.842472  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:15.843291  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:18.343216  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:20.343304  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:22.843202  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:25.342703  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:27.343208  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:29.842303  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:31.843204  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:34.342391  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:36.343286  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:38.842462  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1101 10:05:50.343480  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": net/http: TLS handshake timeout
	W1101 10:05:52.842736  342768 node_ready.go:55] error getting node "ha-832582-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-832582-m02": context deadline exceeded
	I1101 10:05:52.842774  342768 node_ready.go:38] duration metric: took 6m0.000936091s for node "ha-832582-m02" to be "Ready" ...
	I1101 10:05:52.846340  342768 out.go:203] 
	W1101 10:05:52.849403  342768 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:05:52.849424  342768 out.go:285] * 
	W1101 10:05:52.851598  342768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:05:52.854797  342768 out.go:203] 
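
The stderr above is one long retry loop: node_ready.go polls GET /api/v1/nodes/ha-832582-m02 until the node reports the Ready condition or the 6m0s budget expires, treating connection-refused and TLS-handshake-timeout errors as retryable. The Go sketch below illustrates that pattern with client-go; it is not minikube's actual code, and the kubeconfig path and 2s poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test run uses per-profile paths under .minikube.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 6m, mirroring the 6m0s budget in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-832582-m02", metav1.GetOptions{})
			if err != nil {
				// connection refused / TLS handshake timeout are treated as transient: keep retrying.
				fmt.Println("will retry:", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("node never became Ready:", err)
	}
}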
	
	
	==> CRI-O <==
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.211892535Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6d81d35d-5e3a-4a0d-95c7-fd4ce3862a7b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.212989865Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.213090913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.218756833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.219359632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.239436241Z" level=info msg="Created container ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=7d9342d8-5209-4633-ada8-79262e11ab03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.240120305Z" level=info msg="Starting container: ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112" id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:26 ha-832582 crio[666]: time="2025-11-01T10:05:26.24397708Z" level=info msg="Started container" PID=1243 containerID=ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112 description=kube-system/kube-controller-manager-ha-832582/kube-controller-manager id=5059d73f-d026-48cc-ab1b-20755ae53f09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f8bb27411a46d477c2d6c99cd3320cc05020176d2346c660a30b294ab654fd6
	Nov 01 10:05:37 ha-832582 conmon[1241]: conmon ebb69e2d4cc0850778e8 <ninfo>: container 1243 exited with status 1
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.311325101Z" level=info msg="Removing container: 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.320548328Z" level=info msg="Error loading conmon cgroup of container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: cgroup deleted" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:37 ha-832582 crio[666]: time="2025-11-01T10:05:37.3238441Z" level=info msg="Removed container 5dd09765fc1f45308dc1ee4ffcf1117785697d24a7075818ce49cf33aefeb289: kube-system/kube-controller-manager-ha-832582/kube-controller-manager" id=3eadb443-d77a-4f35-8cd0-ab617d092326 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.209911635Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3dba77e3-5193-4cb7-857b-77c03b8eec61 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.214760967Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d11bade9-75dd-4891-a3ac-8b6ec0818fea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217346599Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.217457231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222294082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.222766582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.241766495Z" level=info msg="Created container c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=ddd6e3be-671f-440e-8995-91a3f805c68e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.242395494Z" level=info msg="Starting container: c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961" id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:05:40 ha-832582 crio[666]: time="2025-11-01T10:05:40.245675357Z" level=info msg="Started container" PID=1257 containerID=c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961 description=kube-system/kube-apiserver-ha-832582/kube-apiserver id=75c58720-b050-4290-a4bd-8b44e55c7a3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=04c614211235f3aea840ff0ef3962ce76f51fc82f70daa74b0ed9c0b2a0f7f66
	Nov 01 10:06:00 ha-832582 conmon[1255]: conmon c883cef2aa1b7c987d02 <ninfo>: container 1257 exited with status 255
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.37364516Z" level=info msg="Removing container: 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.380903964Z" level=info msg="Error loading conmon cgroup of container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: cgroup deleted" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:06:01 ha-832582 crio[666]: time="2025-11-01T10:06:01.383910222Z" level=info msg="Removed container 025927d71386846664ca51f5cb53b79e63c60aaa0c20929a5258ca066b77bb2b: kube-system/kube-apiserver-ha-832582/kube-apiserver" id=c1982ced-ec52-421e-af31-8145603ed279 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c883cef2aa1b7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   37 seconds ago      Exited              kube-apiserver            8                   04c614211235f       kube-apiserver-ha-832582            kube-system
	ebb69e2d4cc08       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   51 seconds ago      Exited              kube-controller-manager   9                   4f8bb27411a46       kube-controller-manager-ha-832582   kube-system
	e5bbf60599882       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      3                   51ff665c16f3c       etcd-ha-832582                      kube-system
	fefab62a504e9       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  2                   adcb5b1f5a762       kube-vip-ha-832582                  kube-system
	6fabe4bc435b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            2                   c588a4af8fecc       kube-scheduler-ha-832582            kube-system
	e24f1c760a238       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Exited              etcd                      2                   51ff665c16f3c       etcd-ha-832582                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	[Nov 1 09:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[  +0.061702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:50] overlayfs: idmapped layers are currently not supported
	[ +32.089424] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:53] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:55] overlayfs: idmapped layers are currently not supported
	[  +4.195210] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:58] overlayfs: idmapped layers are currently not supported
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e24f1c760a2388d6c3baebc8169ffcb0099781302a75e8088ffb7fe0f14abe54] <==
	{"level":"info","ts":"2025-11-01T10:03:28.368864Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:03:28.368907Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:03:28.368997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:03:28.370635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370653Z","caller":"etcdserver/server.go:1272","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-11-01T10:03:28.370677Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370679Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:03:28.370784Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370801Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370832Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.370842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370825Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.370915Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:03:28.370990Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:03:28.371010Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.370965Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371030Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371047Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.371056Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"3c3ae81873ee7e73"}
	{"level":"info","ts":"2025-11-01T10:03:28.374519Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:03:28.374595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:03:28.374658Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:03:28.374686Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-832582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e5bbf60599882a44b7077046577e6c6d255753632f3ad97ed0e3d65eb2697937] <==
	{"level":"info","ts":"2025-11-01T10:06:14.659321Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:14.659332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:14.864866Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:15.365666Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:15.758907Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.758969Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.758993Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.759026Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:15.759037Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:15.866744Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:16.367180Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:16.596004Z","caller":"etcdserver/server.go:1814","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-832582 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-11-01T10:06:16.858875Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:16.858934Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:16.858958Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:16.858998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:16.859010Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-01T10:06:16.867622Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:17.368447Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-01T10:06:17.869368Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041022320782887,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-01T10:06:17.958698Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:17.958749Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:17.958772Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 5, index: 2938] sent MsgPreVote request to 3c3ae81873ee7e73 at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:17.958801Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-11-01T10:06:17.958812Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> kernel <==
	 10:06:18 up  1:48,  0 user,  load average: 0.49, 0.90, 1.44
	Linux ha-832582 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c883cef2aa1b7c987d023c31f9deb5c45f89c642f182d7bdcd653c84080b1961] <==
	I1101 10:05:40.306392       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1101 10:05:40.853033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1101 10:05:40.853065       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1101 10:05:40.853075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1101 10:05:40.853080       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1101 10:05:40.853085       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1101 10:05:40.853089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1101 10:05:40.853093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1101 10:05:40.853097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1101 10:05:40.853101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1101 10:05:40.853106       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1101 10:05:40.853110       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1101 10:05:40.853114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1101 10:05:40.870762       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:05:40.872294       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1101 10:05:40.872930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 10:05:40.879616       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:05:40.890179       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 10:05:40.890287       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 10:05:40.890929       1 instance.go:239] Using reconciler: lease
	W1101 10:05:40.892474       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.869430       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:06:00.872570       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W1101 10:06:00.892234       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1101 10:06:00.892232       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
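
The apiserver exits because its etcd client at 127.0.0.1:2379 never completes a handshake while the cluster has no leader, so the lease-based storage factory hits its deadline. The sketch below shows one way to probe an etcd endpoint directly with the official client/v3 package; it is illustrative only, and a real probe against this cluster would also need the etcd client certificate and CA, which are omitted here.

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// TLS material is intentionally omitted; a real probe would set Config.TLS.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Println("client setup failed:", err)
		return
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	st, err := cli.Status(ctx, "https://127.0.0.1:2379")
	if err != nil {
		// With no raft leader (as above) or missing certs this fails or times out.
		fmt.Println("status probe failed:", err)
		return
	}
	fmt.Printf("leader=%x raftTerm=%d version=%s\n", st.Leader, st.RaftTerm, st.Version)
}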
	
	
	==> kube-controller-manager [ebb69e2d4cc0850778e8b0bb6a69da42f6cf05b723b234607269332bef740112] <==
	I1101 10:05:26.730710       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:05:27.221967       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 10:05:27.222053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:05:27.223635       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:05:27.223814       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:05:27.224036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 10:05:27.224086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:05:37.225354       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
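
The controller-manager gives up for the same underlying reason: its startup gate polls the apiserver's /healthz endpoint and the connection is refused. Below is a minimal Go sketch of such a probe against the 192.168.49.2:8443 endpoint seen in the log; skipping certificate verification is purely for illustration, the real check trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err) // e.g. connection refused, as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}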
	
	
	==> kube-scheduler [6fabe4bc435b38aabf3b295822c18d3e9ae184e4bd65e3255404be3ea71d8088] <==
	E1101 10:05:24.146896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:05:28.850014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:05:29.568563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:05:31.156997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:05:32.075760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:05:34.876970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:05:36.541398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:05:36.855814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:05:51.115287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:05:52.948469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:05:56.934437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:06:01.899981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50632->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:06:01.900101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50552->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:06:01.900186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50560->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:06:01.900279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50606->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:06:01.900365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50620->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:06:01.900449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50654->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:06:01.900469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50592->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:06:02.196959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:06:03.029583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:06:05.944882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:06:10.499860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:06:13.307117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:06:13.410235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:06:17.459041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	
	
	==> kubelet <==
	Nov 01 10:06:15 ha-832582 kubelet[802]: E1101 10:06:15.956325     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.057844     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.159368     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.259908     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.361363     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.462197     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.563049     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.664193     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.766180     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.867178     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:16 ha-832582 kubelet[802]: E1101 10:06:16.968279     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.069569     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.170721     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.272133     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.373333     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.474342     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.575063     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.676426     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.777651     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.879025     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:17 ha-832582 kubelet[802]: E1101 10:06:17.979787     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:18 ha-832582 kubelet[802]: E1101 10:06:18.081219     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:18 ha-832582 kubelet[802]: E1101 10:06:18.182544     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:18 ha-832582 kubelet[802]: E1101 10:06:18.283334     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 01 10:06:18 ha-832582 kubelet[802]: E1101 10:06:18.384384     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-832582\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-832582 -n ha-832582: exit status 2 (329.613153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-832582" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.23s)
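
The status check above (--format={{.APIServer}}) reports "Stopped" because nothing is answering on the control-plane endpoint, which matches the repeated "dial tcp 192.168.49.2:8443: connect: connection refused" lines in the reflector and kubelet logs. Below is a tiny, illustrative Go sketch of that connectivity check; the probe program is an assumption added for clarity, not part of the test suite.

// apiserver_probe.go: illustrative only. A plain TCP dial against the
// control-plane endpoint seen in the logs above; when the apiserver is down
// this fails the same way the reflector/kubelet lines do.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8443", 2*time.Second)
	if err != nil {
		// The state shown above: nothing listening on 8443, so the
		// status template for .APIServer resolves to "Stopped".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}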

                                                
                                    
TestJSONOutput/pause/Command (1.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-263903 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-263903 --output=json --user=testUser: exit status 80 (1.63486003s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7e986eea-7cdf-496c-8ec4-d01ce3e0dcc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-263903 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"956f5674-cd6e-4fde-b33c-292f3548aac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T10:07:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"f6f8975d-c4c0-451b-8cea-5e90823bcd50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-263903 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.64s)
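
The stdout above is minikube's --output=json event stream: one CloudEvents-style JSON object per line, with the failure surfaced as an io.k8s.sigs.minikube.error event whose data carries exitcode "80" and the GUEST_PAUSE message. The following is a minimal Go sketch of decoding such lines; only the field names are taken from the output above, while the struct and program are illustrative assumptions, not part of json_output_test.go.

// cloudevent_sketch.go: decode the line-delimited JSON events shown above.
// Field names mirror the sample output; everything else is illustrative.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Example usage (hypothetical):
	//   out/minikube-linux-arm64 pause -p json-output-263903 --output=json | go run cloudevent_sketch.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		// Error events (type io.k8s.sigs.minikube.error) carry "exitcode" and
		// "message", which is how the GUEST_PAUSE failure above surfaces.
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}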

                                                
                                    
TestJSONOutput/unpause/Command (2.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-263903 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-263903 --output=json --user=testUser: exit status 80 (2.045524341s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fdc4d5b0-464a-41bd-9acf-578962e1f32d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-263903 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d739da66-70af-438e-b6b0-58ca611d2272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T10:07:50Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"d04ca74c-f723-4908-a1de-6cc4f68b573b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-263903 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.05s)

                                                
                                    
TestPause/serial/Pause (8.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-197523 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-197523 --alsologtostderr -v=5: exit status 80 (2.62446398s)

                                                
                                                
-- stdout --
	* Pausing node pause-197523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:30:06.866252  445857 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:30:06.867569  445857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:30:06.867615  445857 out.go:374] Setting ErrFile to fd 2...
	I1101 10:30:06.867639  445857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:30:06.867954  445857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:30:06.868278  445857 out.go:368] Setting JSON to false
	I1101 10:30:06.868332  445857 mustload.go:66] Loading cluster: pause-197523
	I1101 10:30:06.868998  445857 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:30:06.870030  445857 cli_runner.go:164] Run: docker container inspect pause-197523 --format={{.State.Status}}
	I1101 10:30:06.891255  445857 host.go:66] Checking if "pause-197523" exists ...
	I1101 10:30:06.891564  445857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:30:06.965206  445857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:30:06.953383697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:30:06.966011  445857 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-197523 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:30:06.969399  445857 out.go:179] * Pausing node pause-197523 ... 
	I1101 10:30:06.972219  445857 host.go:66] Checking if "pause-197523" exists ...
	I1101 10:30:06.972560  445857 ssh_runner.go:195] Run: systemctl --version
	I1101 10:30:06.972613  445857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:30:06.988981  445857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:30:07.117478  445857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:30:07.145096  445857 pause.go:52] kubelet running: true
	I1101 10:30:07.145245  445857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:30:07.461890  445857 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:30:07.461981  445857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:30:07.606419  445857 cri.go:89] found id: "4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a"
	I1101 10:30:07.606450  445857 cri.go:89] found id: "3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84"
	I1101 10:30:07.606455  445857 cri.go:89] found id: "b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7"
	I1101 10:30:07.606459  445857 cri.go:89] found id: "6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a"
	I1101 10:30:07.606463  445857 cri.go:89] found id: "c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6"
	I1101 10:30:07.606466  445857 cri.go:89] found id: "87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed"
	I1101 10:30:07.606470  445857 cri.go:89] found id: "d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4"
	I1101 10:30:07.606473  445857 cri.go:89] found id: "b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	I1101 10:30:07.606476  445857 cri.go:89] found id: "99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362"
	I1101 10:30:07.606482  445857 cri.go:89] found id: "da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2"
	I1101 10:30:07.606486  445857 cri.go:89] found id: "6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31"
	I1101 10:30:07.606489  445857 cri.go:89] found id: "44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01"
	I1101 10:30:07.606493  445857 cri.go:89] found id: "7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447"
	I1101 10:30:07.606496  445857 cri.go:89] found id: "4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3"
	I1101 10:30:07.606499  445857 cri.go:89] found id: ""
	I1101 10:30:07.606548  445857 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:30:07.621817  445857 retry.go:31] will retry after 294.830685ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:30:07Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:30:07.917310  445857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:30:07.938395  445857 pause.go:52] kubelet running: false
	I1101 10:30:07.938475  445857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:30:08.145823  445857 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:30:08.145939  445857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:30:08.242668  445857 cri.go:89] found id: "4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a"
	I1101 10:30:08.242702  445857 cri.go:89] found id: "3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84"
	I1101 10:30:08.242708  445857 cri.go:89] found id: "b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7"
	I1101 10:30:08.242712  445857 cri.go:89] found id: "6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a"
	I1101 10:30:08.242715  445857 cri.go:89] found id: "c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6"
	I1101 10:30:08.242739  445857 cri.go:89] found id: "87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed"
	I1101 10:30:08.242751  445857 cri.go:89] found id: "d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4"
	I1101 10:30:08.242754  445857 cri.go:89] found id: "b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	I1101 10:30:08.242757  445857 cri.go:89] found id: "99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362"
	I1101 10:30:08.242788  445857 cri.go:89] found id: "da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2"
	I1101 10:30:08.242799  445857 cri.go:89] found id: "6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31"
	I1101 10:30:08.242802  445857 cri.go:89] found id: "44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01"
	I1101 10:30:08.242806  445857 cri.go:89] found id: "7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447"
	I1101 10:30:08.242821  445857 cri.go:89] found id: "4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3"
	I1101 10:30:08.242831  445857 cri.go:89] found id: ""
	I1101 10:30:08.242895  445857 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:30:08.254512  445857 retry.go:31] will retry after 336.230946ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:30:08Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:30:08.590981  445857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:30:08.610222  445857 pause.go:52] kubelet running: false
	I1101 10:30:08.610352  445857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:30:08.903253  445857 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:30:08.903380  445857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:30:09.041672  445857 cri.go:89] found id: "4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a"
	I1101 10:30:09.041708  445857 cri.go:89] found id: "3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84"
	I1101 10:30:09.041713  445857 cri.go:89] found id: "b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7"
	I1101 10:30:09.041717  445857 cri.go:89] found id: "6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a"
	I1101 10:30:09.041720  445857 cri.go:89] found id: "c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6"
	I1101 10:30:09.041723  445857 cri.go:89] found id: "87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed"
	I1101 10:30:09.041727  445857 cri.go:89] found id: "d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4"
	I1101 10:30:09.041729  445857 cri.go:89] found id: "b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	I1101 10:30:09.041733  445857 cri.go:89] found id: "99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362"
	I1101 10:30:09.041739  445857 cri.go:89] found id: "da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2"
	I1101 10:30:09.041743  445857 cri.go:89] found id: "6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31"
	I1101 10:30:09.041746  445857 cri.go:89] found id: "44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01"
	I1101 10:30:09.041749  445857 cri.go:89] found id: "7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447"
	I1101 10:30:09.041759  445857 cri.go:89] found id: "4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3"
	I1101 10:30:09.041766  445857 cri.go:89] found id: ""
	I1101 10:30:09.041816  445857 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:30:09.061089  445857 out.go:203] 
	W1101 10:30:09.064007  445857 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:30:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:30:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:30:09.064034  445857 out.go:285] * 
	* 
	W1101 10:30:09.403034  445857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:30:09.408139  445857 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-197523 --alsologtostderr -v=5" : exit status 80
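
The stderr above shows the pause flow retrying the failing "sudo runc list -f json" check with a short backoff (retry.go:31) before giving up with GUEST_PAUSE. Below is a small, illustrative Go sketch of that retry-with-backoff pattern around the same failing command, run locally rather than over SSH; the helper name and backoff policy are assumptions, not minikube's actual implementation.

// retry_sketch.go: illustrative retry-with-backoff, modelled loosely on the
// "will retry after ..." behaviour visible in the log above.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping a randomised delay between
// tries, and returns the last error if every attempt fails.
func retry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// The check that fails above: on this node runc exits 1 with
	// "open /run/runc: no such file or directory", so every attempt fails.
	if err := retry(3, func() error {
		return exec.Command("sudo", "runc", "list", "-f", "json").Run()
	}); err != nil {
		fmt.Println("giving up:", err)
	}
}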
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-197523
helpers_test.go:243: (dbg) docker inspect pause-197523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107",
	        "Created": "2025-11-01T10:28:20.242203934Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 438917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:28:20.32920004Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/hostname",
	        "HostsPath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/hosts",
	        "LogPath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107-json.log",
	        "Name": "/pause-197523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-197523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-197523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107",
	                "LowerDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-197523",
	                "Source": "/var/lib/docker/volumes/pause-197523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-197523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-197523",
	                "name.minikube.sigs.k8s.io": "pause-197523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26224c3703e50552df403e8123027b5ce5cc80e7bebdddbac6e19889c12769fe",
	            "SandboxKey": "/var/run/docker/netns/26224c3703e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-197523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:28:59:05:1c:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f41f9d3ed4581b14e0c3dce3ce74200150d668805cc1e4da30ba9f5353e7a79e",
	                    "EndpointID": "2159e744649b89aa3ad921e037b9cf65ef10756ed2dfcb314e2b231ec57bdbbb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-197523",
	                        "9adb21525266"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
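
Earlier in the stderr, cli_runner resolves the node's SSH endpoint with a Go template over the container's published ports, and the inspect output above shows the same mapping (22/tcp published on 127.0.0.1:33384). A short Go sketch wrapping that exact template follows; only the wrapper program is illustrative.

// ssh_port_sketch.go: resolve the host port mapped to the container's 22/tcp
// using the same template the log above passes to docker container inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// For the inspect output above this prints 33384.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-197523").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}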
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-197523 -n pause-197523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-197523 -n pause-197523: exit status 2 (500.376919ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-197523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-197523 logs -n 25: (1.853296688s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-180480                                                                                                                   │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:25 UTC │
	│ start   │ -p NoKubernetes-180480 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:25 UTC │
	│ delete  │ -p missing-upgrade-843745                                                                                                                │ missing-upgrade-843745    │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:25 UTC │
	│ ssh     │ -p NoKubernetes-180480 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │                     │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:26 UTC │
	│ stop    │ -p NoKubernetes-180480                                                                                                                   │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p NoKubernetes-180480 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ ssh     │ -p NoKubernetes-180480 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │                     │
	│ delete  │ -p NoKubernetes-180480                                                                                                                   │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p stopped-upgrade-261821 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-261821    │ jenkins │ v1.32.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ stop    │ -p kubernetes-upgrade-683031                                                                                                             │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:28 UTC │
	│ stop    │ stopped-upgrade-261821 stop                                                                                                              │ stopped-upgrade-261821    │ jenkins │ v1.32.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p stopped-upgrade-261821 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-261821    │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:27 UTC │
	│ delete  │ -p stopped-upgrade-261821                                                                                                                │ stopped-upgrade-261821    │ jenkins │ v1.37.0 │ 01 Nov 25 10:27 UTC │ 01 Nov 25 10:27 UTC │
	│ start   │ -p running-upgrade-645343 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-645343    │ jenkins │ v1.32.0 │ 01 Nov 25 10:27 UTC │ 01 Nov 25 10:27 UTC │
	│ start   │ -p running-upgrade-645343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-645343    │ jenkins │ v1.37.0 │ 01 Nov 25 10:27 UTC │ 01 Nov 25 10:28 UTC │
	│ delete  │ -p running-upgrade-645343                                                                                                                │ running-upgrade-645343    │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │ 01 Nov 25 10:28 UTC │
	│ start   │ -p pause-197523 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-197523              │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │ 01 Nov 25 10:29 UTC │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │                     │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │ 01 Nov 25 10:29 UTC │
	│ delete  │ -p kubernetes-upgrade-683031                                                                                                             │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:29 UTC │ 01 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-flag-854151 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-854151 │ jenkins │ v1.37.0 │ 01 Nov 25 10:29 UTC │                     │
	│ start   │ -p pause-197523 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-197523              │ jenkins │ v1.37.0 │ 01 Nov 25 10:29 UTC │ 01 Nov 25 10:30 UTC │
	│ pause   │ -p pause-197523 --alsologtostderr -v=5                                                                                                   │ pause-197523              │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:29:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:29:39.295720  443746 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:29:39.296036  443746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:39.296067  443746 out.go:374] Setting ErrFile to fd 2...
	I1101 10:29:39.296087  443746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:39.296391  443746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:29:39.296850  443746 out.go:368] Setting JSON to false
	I1101 10:29:39.298019  443746 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7929,"bootTime":1761985051,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:29:39.298116  443746 start.go:143] virtualization:  
	I1101 10:29:39.303285  443746 out.go:179] * [pause-197523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:29:39.307246  443746 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:29:39.307310  443746 notify.go:221] Checking for updates...
	I1101 10:29:39.314724  443746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:29:39.317237  443746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:29:39.320231  443746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:29:39.323195  443746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:29:39.326179  443746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:29:39.329561  443746 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:39.330173  443746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:29:39.372017  443746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:29:39.372126  443746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:29:39.477474  443746 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:64 SystemTime:2025-11-01 10:29:39.467260743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:29:39.477581  443746 docker.go:319] overlay module found
	I1101 10:29:39.480883  443746 out.go:179] * Using the docker driver based on existing profile
	I1101 10:29:39.483761  443746 start.go:309] selected driver: docker
	I1101 10:29:39.483782  443746 start.go:930] validating driver "docker" against &{Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:39.483920  443746 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:29:39.484028  443746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:29:39.539079  443746 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:64 SystemTime:2025-11-01 10:29:39.529987551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:29:39.539491  443746 cni.go:84] Creating CNI manager for ""
	I1101 10:29:39.539561  443746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:29:39.539609  443746 start.go:353] cluster config:
	{Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:39.542765  443746 out.go:179] * Starting "pause-197523" primary control-plane node in "pause-197523" cluster
	I1101 10:29:39.545612  443746 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:29:39.548663  443746 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:29:39.551572  443746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:29:39.551638  443746 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:29:39.551648  443746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:29:39.551653  443746 cache.go:59] Caching tarball of preloaded images
	I1101 10:29:39.552023  443746 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:29:39.552034  443746 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:29:39.552167  443746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/config.json ...
	I1101 10:29:39.571820  443746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:29:39.571846  443746 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:29:39.571867  443746 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:29:39.571893  443746 start.go:360] acquireMachinesLock for pause-197523: {Name:mk6d808ea7a56f48373318480031d6f0811b7ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:29:39.571955  443746 start.go:364] duration metric: took 37.777µs to acquireMachinesLock for "pause-197523"
	I1101 10:29:39.571979  443746 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:29:39.571985  443746 fix.go:54] fixHost starting: 
	I1101 10:29:39.572275  443746 cli_runner.go:164] Run: docker container inspect pause-197523 --format={{.State.Status}}
	I1101 10:29:39.588810  443746 fix.go:112] recreateIfNeeded on pause-197523: state=Running err=<nil>
	W1101 10:29:39.588847  443746 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:29:37.916677  442711 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-854151:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.455129833s)
	I1101 10:29:37.916707  442711 kic.go:203] duration metric: took 4.455279184s to extract preloaded images to volume ...
	W1101 10:29:37.916853  442711 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:29:37.916968  442711 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:29:37.975944  442711 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-854151 --name force-systemd-flag-854151 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-854151 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-854151 --network force-systemd-flag-854151 --ip 192.168.85.2 --volume force-systemd-flag-854151:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:29:38.291651  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Running}}
	I1101 10:29:38.318359  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:29:38.338802  442711 cli_runner.go:164] Run: docker exec force-systemd-flag-854151 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:29:38.400320  442711 oci.go:144] the created container "force-systemd-flag-854151" has a running status.
	I1101 10:29:38.400348  442711 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa...
	I1101 10:29:38.546138  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 10:29:38.546197  442711 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:29:38.576195  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:29:38.597863  442711 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:29:38.597887  442711 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-854151 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:29:38.663366  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:29:38.681001  442711 machine.go:94] provisionDockerMachine start ...
	I1101 10:29:38.681096  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:38.711402  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:38.711746  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:38.711755  442711 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:29:38.712536  442711 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39260->127.0.0.1:33389: read: connection reset by peer
	I1101 10:29:41.865391  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-854151
	
	I1101 10:29:41.865415  442711 ubuntu.go:182] provisioning hostname "force-systemd-flag-854151"
	I1101 10:29:41.865486  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:41.882869  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:41.883186  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:41.883203  442711 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-854151 && echo "force-systemd-flag-854151" | sudo tee /etc/hostname
	I1101 10:29:42.049255  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-854151
	
	I1101 10:29:42.049337  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:42.073512  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:42.073891  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:42.073916  442711 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-854151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-854151/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-854151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:29:42.235094  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:29:42.235146  442711 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:29:42.235179  442711 ubuntu.go:190] setting up certificates
	I1101 10:29:42.235190  442711 provision.go:84] configureAuth start
	I1101 10:29:42.235266  442711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-854151
	I1101 10:29:42.254380  442711 provision.go:143] copyHostCerts
	I1101 10:29:42.254433  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:29:42.254485  442711 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:29:42.254499  442711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:29:42.254582  442711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:29:42.254674  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:29:42.254698  442711 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:29:42.254703  442711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:29:42.254733  442711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:29:42.254783  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:29:42.254804  442711 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:29:42.254808  442711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:29:42.254835  442711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:29:42.254894  442711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-854151 san=[127.0.0.1 192.168.85.2 force-systemd-flag-854151 localhost minikube]
	I1101 10:29:39.592263  443746 out.go:252] * Updating the running docker "pause-197523" container ...
	I1101 10:29:39.592298  443746 machine.go:94] provisionDockerMachine start ...
	I1101 10:29:39.592390  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:39.609364  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:39.609723  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:39.609738  443746 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:29:39.765116  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-197523
	
	I1101 10:29:39.765183  443746 ubuntu.go:182] provisioning hostname "pause-197523"
	I1101 10:29:39.765283  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:39.782611  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:39.782942  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:39.782958  443746 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-197523 && echo "pause-197523" | sudo tee /etc/hostname
	I1101 10:29:39.943373  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-197523
	
	I1101 10:29:39.943466  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:39.961999  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:39.962307  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:39.962327  443746 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-197523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-197523/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-197523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:29:40.118723  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:29:40.118791  443746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:29:40.118814  443746 ubuntu.go:190] setting up certificates
	I1101 10:29:40.118835  443746 provision.go:84] configureAuth start
	I1101 10:29:40.118896  443746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-197523
	I1101 10:29:40.137285  443746 provision.go:143] copyHostCerts
	I1101 10:29:40.137361  443746 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:29:40.137378  443746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:29:40.137458  443746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:29:40.137608  443746 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:29:40.137619  443746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:29:40.137650  443746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:29:40.137882  443746 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:29:40.137897  443746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:29:40.137929  443746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:29:40.137990  443746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.pause-197523 san=[127.0.0.1 192.168.76.2 localhost minikube pause-197523]
	I1101 10:29:40.351844  443746 provision.go:177] copyRemoteCerts
	I1101 10:29:40.351917  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:29:40.351962  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:40.371563  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:40.477431  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:29:40.494732  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:29:40.512178  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:29:40.530189  443746 provision.go:87] duration metric: took 411.339674ms to configureAuth
	I1101 10:29:40.530216  443746 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:29:40.530446  443746 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:40.530552  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:40.547402  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:40.547724  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:40.547744  443746 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:29:42.876451  442711 provision.go:177] copyRemoteCerts
	I1101 10:29:42.876528  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:29:42.876578  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:42.893293  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.002383  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 10:29:43.002480  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:29:43.022432  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 10:29:43.022495  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 10:29:43.041115  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 10:29:43.041178  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:29:43.060264  442711 provision.go:87] duration metric: took 825.053849ms to configureAuth
	I1101 10:29:43.060290  442711 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:29:43.060487  442711 config.go:182] Loaded profile config "force-systemd-flag-854151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:43.060611  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.078415  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:43.078734  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:43.078755  442711 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:29:43.339966  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:29:43.339990  442711 machine.go:97] duration metric: took 4.658971657s to provisionDockerMachine
	I1101 10:29:43.340000  442711 client.go:176] duration metric: took 10.583806957s to LocalClient.Create
	I1101 10:29:43.340068  442711 start.go:167] duration metric: took 10.583870408s to libmachine.API.Create "force-systemd-flag-854151"
	I1101 10:29:43.340085  442711 start.go:293] postStartSetup for "force-systemd-flag-854151" (driver="docker")
	I1101 10:29:43.340109  442711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:29:43.340187  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:29:43.340265  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.357951  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.462589  442711 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:29:43.466232  442711 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:29:43.466263  442711 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:29:43.466275  442711 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:29:43.466329  442711 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:29:43.466410  442711 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:29:43.466428  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 10:29:43.466529  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:29:43.474125  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:43.491612  442711 start.go:296] duration metric: took 151.513037ms for postStartSetup
	I1101 10:29:43.492004  442711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-854151
	I1101 10:29:43.509227  442711 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/config.json ...
	I1101 10:29:43.509515  442711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:29:43.509573  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.526142  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.626944  442711 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:29:43.632151  442711 start.go:128] duration metric: took 10.87932789s to createHost
	I1101 10:29:43.632188  442711 start.go:83] releasing machines lock for "force-systemd-flag-854151", held for 10.879460264s
	I1101 10:29:43.632258  442711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-854151
	I1101 10:29:43.649477  442711 ssh_runner.go:195] Run: cat /version.json
	I1101 10:29:43.649499  442711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:29:43.649535  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.649554  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.669801  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.684194  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.870773  442711 ssh_runner.go:195] Run: systemctl --version
	I1101 10:29:43.877209  442711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:29:43.916414  442711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:29:43.920717  442711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:29:43.920835  442711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:29:43.949634  442711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:29:43.949674  442711 start.go:496] detecting cgroup driver to use...
	I1101 10:29:43.949686  442711 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1101 10:29:43.949781  442711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:29:43.969121  442711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:29:43.982277  442711 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:29:43.982397  442711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:29:44.007675  442711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:29:44.029438  442711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:29:44.142715  442711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:29:44.269881  442711 docker.go:234] disabling docker service ...
	I1101 10:29:44.269952  442711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:29:44.291329  442711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:29:44.304908  442711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:29:44.426321  442711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:29:44.544052  442711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:29:44.558529  442711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:29:44.572257  442711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:29:44.572332  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.581674  442711 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:29:44.581878  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.591952  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.601379  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.611535  442711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:29:44.620044  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.629319  442711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.643538  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.652743  442711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:29:44.660463  442711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:29:44.668203  442711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:44.788361  442711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:29:44.912902  442711 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:29:44.913016  442711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:29:44.916922  442711 start.go:564] Will wait 60s for crictl version
	I1101 10:29:44.917030  442711 ssh_runner.go:195] Run: which crictl
	I1101 10:29:44.920582  442711 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:29:44.953266  442711 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:29:44.953408  442711 ssh_runner.go:195] Run: crio --version
	I1101 10:29:44.983480  442711 ssh_runner.go:195] Run: crio --version
	I1101 10:29:45.035436  442711 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:29:45.038843  442711 cli_runner.go:164] Run: docker network inspect force-systemd-flag-854151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:29:45.072973  442711 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:29:45.078518  442711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:29:45.096922  442711 kubeadm.go:884] updating cluster {Name:force-systemd-flag-854151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-854151 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:29:45.097064  442711 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:29:45.097137  442711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:45.176826  442711 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:45.176861  442711 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:29:45.176934  442711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:45.210205  442711 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:45.210233  442711 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:29:45.210243  442711 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:29:45.210343  442711 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-854151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-854151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:29:45.210443  442711 ssh_runner.go:195] Run: crio config
	I1101 10:29:45.297523  442711 cni.go:84] Creating CNI manager for ""
	I1101 10:29:45.297559  442711 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:29:45.297583  442711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:29:45.297611  442711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-854151 NodeName:force-systemd-flag-854151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:29:45.297877  442711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-854151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:29:45.297963  442711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:29:45.310976  442711 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:29:45.311117  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:29:45.324828  442711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1101 10:29:45.346687  442711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:29:45.365935  442711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1101 10:29:45.382906  442711 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:29:45.387517  442711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:29:45.402298  442711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:45.528628  442711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:29:45.546603  442711 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151 for IP: 192.168.85.2
	I1101 10:29:45.546667  442711 certs.go:195] generating shared ca certs ...
	I1101 10:29:45.546698  442711 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:45.546883  442711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:29:45.546963  442711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:29:45.547000  442711 certs.go:257] generating profile certs ...
	I1101 10:29:45.547102  442711 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key
	I1101 10:29:45.547135  442711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt with IP's: []
	I1101 10:29:46.415000  442711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt ...
	I1101 10:29:46.415090  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt: {Name:mkc1fd22bd54e1c2f89bde43293a11f96082f927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:46.415311  442711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key ...
	I1101 10:29:46.415358  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key: {Name:mkadb2d3b4c28a88bdcf3ae0e69b45b7c9bcb4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:46.415523  442711 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540
	I1101 10:29:46.415580  442711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:29:45.911365  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:29:45.911392  443746 machine.go:97] duration metric: took 6.319082175s to provisionDockerMachine
	I1101 10:29:45.911403  443746 start.go:293] postStartSetup for "pause-197523" (driver="docker")
	I1101 10:29:45.911414  443746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:29:45.911489  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:29:45.911536  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:45.932890  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.039309  443746 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:29:46.043834  443746 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:29:46.043860  443746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:29:46.043871  443746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:29:46.043922  443746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:29:46.044003  443746 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:29:46.044104  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:29:46.053260  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:46.079847  443746 start.go:296] duration metric: took 168.427247ms for postStartSetup
	I1101 10:29:46.079977  443746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:29:46.080048  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:46.100120  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.207863  443746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:29:46.213630  443746 fix.go:56] duration metric: took 6.641637966s for fixHost
	I1101 10:29:46.213651  443746 start.go:83] releasing machines lock for "pause-197523", held for 6.641683202s
	I1101 10:29:46.213742  443746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-197523
	I1101 10:29:46.235414  443746 ssh_runner.go:195] Run: cat /version.json
	I1101 10:29:46.235488  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:46.235749  443746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:29:46.235807  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:46.263011  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.279810  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.370106  443746 ssh_runner.go:195] Run: systemctl --version
	I1101 10:29:46.470997  443746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:29:46.549572  443746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:29:46.554936  443746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:29:46.555002  443746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:29:46.563922  443746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:29:46.563946  443746 start.go:496] detecting cgroup driver to use...
	I1101 10:29:46.563977  443746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:29:46.564026  443746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:29:46.581139  443746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:29:46.595884  443746 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:29:46.595951  443746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:29:46.612471  443746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:29:46.627873  443746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:29:46.799935  443746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:29:46.968841  443746 docker.go:234] disabling docker service ...
	I1101 10:29:46.968907  443746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:29:46.987033  443746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:29:47.001822  443746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:29:47.198200  443746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:29:47.419139  443746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:29:47.437343  443746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:29:47.452499  443746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:29:47.452591  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.461857  443746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:29:47.461925  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.471596  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.484341  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.492956  443746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:29:47.501276  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.511009  443746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.520232  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
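Taken together, the crictl.yaml write and the sed edits above should leave the node with roughly the following (a reconstruction from the logged commands, not a dump of the actual files):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (touched keys only)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]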
	I1101 10:29:47.529905  443746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:29:47.538067  443746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:29:47.545977  443746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:47.716024  443746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:29:47.940930  443746 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:29:47.941007  443746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:29:47.945680  443746 start.go:564] Will wait 60s for crictl version
	I1101 10:29:47.945756  443746 ssh_runner.go:195] Run: which crictl
	I1101 10:29:47.949590  443746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:29:48.001939  443746 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:29:48.002033  443746 ssh_runner.go:195] Run: crio --version
	I1101 10:29:48.050079  443746 ssh_runner.go:195] Run: crio --version
	I1101 10:29:48.090055  443746 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:29:48.092953  443746 cli_runner.go:164] Run: docker network inspect pause-197523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:29:48.126254  443746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
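The grep above looks for the host-gateway alias minikube keeps in the guest's /etc/hosts; if the entry is missing it gets appended, roughly like this (a sketch using the gateway IP from the logged command):

	$ printf '%s\thost.minikube.internal\n' 192.168.76.1 | sudo tee -a /etc/hosts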
	I1101 10:29:48.130464  443746 kubeadm.go:884] updating cluster {Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:29:48.130598  443746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:29:48.130653  443746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:48.179952  443746 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:48.179973  443746 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:29:48.180027  443746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:48.208495  443746 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:48.208516  443746 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:29:48.208524  443746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:29:48.208626  443746 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-197523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:29:48.208706  443746 ssh_runner.go:195] Run: crio config
	I1101 10:29:48.285829  443746 cni.go:84] Creating CNI manager for ""
	I1101 10:29:48.285903  443746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:29:48.285944  443746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:29:48.285997  443746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-197523 NodeName:pause-197523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:29:48.286171  443746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-197523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:29:48.286285  443746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:29:48.296012  443746 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:29:48.296089  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:29:48.305288  443746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 10:29:48.319684  443746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:29:48.337968  443746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
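The kubeadm config rendered a few lines up is the file just written to /var/tmp/minikube/kubeadm.yaml.new. To sanity-check such a file by hand, kubeadm itself can validate it (a sketch reusing the binary path and file location from this log; `kubeadm config validate` exists in current kubeadm releases):

	$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new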
	I1101 10:29:48.353295  443746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:29:48.358301  443746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:48.534653  443746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:29:48.549671  443746 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523 for IP: 192.168.76.2
	I1101 10:29:48.549760  443746 certs.go:195] generating shared ca certs ...
	I1101 10:29:48.549778  443746 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:48.549943  443746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:29:48.549987  443746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:29:48.549995  443746 certs.go:257] generating profile certs ...
	I1101 10:29:48.550082  443746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key
	I1101 10:29:48.550148  443746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/apiserver.key.a1c74574
	I1101 10:29:48.550185  443746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/proxy-client.key
	I1101 10:29:48.550302  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:29:48.550332  443746 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:29:48.550339  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:29:48.550360  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:29:48.550384  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:29:48.550404  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:29:48.550444  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:48.551070  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:29:48.573517  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:29:48.591752  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:29:48.609794  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:29:48.629344  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:29:48.650757  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:29:48.671722  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:29:48.692018  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:29:48.730239  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:29:48.753477  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:29:48.772344  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:29:48.792226  443746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:29:48.807654  443746 ssh_runner.go:195] Run: openssl version
	I1101 10:29:48.816890  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:29:48.826818  443746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.832172  443746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.832232  443746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.880129  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:29:48.890334  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:29:48.900309  443746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.906320  443746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.906384  443746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.958535  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:29:48.968929  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:29:48.978795  443746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:29:48.984672  443746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:29:48.984750  443746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.037943  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
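The hash-named links created above and again further down (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how /etc/ssl/certs is indexed. A minimal sketch of the derivation for one of them:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0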
	I1101 10:29:49.047339  443746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:29:49.052064  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:29:49.096015  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:29:49.146968  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:29:49.244234  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
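Each of these openssl calls passes -checkend 86400: it exits non-zero if the certificate will already be expired 24 hours from now, and that exit status is all that gets checked. A standalone sketch:

	$ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least another 24h" || echo "expires within 24h"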
	I1101 10:29:47.667603  442711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540 ...
	I1101 10:29:47.667635  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540: {Name:mk6b2a37132a3498f5409f04d7b4bb0504bcfda7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:47.667814  442711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540 ...
	I1101 10:29:47.667831  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540: {Name:mk6f2a0957589eaddbdc91953416f9f6e758d1c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:47.667910  442711 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt
	I1101 10:29:47.667992  442711 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key
	I1101 10:29:47.668054  442711 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key
	I1101 10:29:47.668074  442711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt with IP's: []
	I1101 10:29:48.804318  442711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt ...
	I1101 10:29:48.804381  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt: {Name:mk6ca4f6e33ae9c047ccace6f2532c90c868494b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:48.805226  442711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key ...
	I1101 10:29:48.805244  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key: {Name:mk08c15e6950c6bbc5bd8dc874b1a85a31f5ecb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:48.805898  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 10:29:48.805925  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 10:29:48.805938  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 10:29:48.805950  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 10:29:48.805961  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 10:29:48.805974  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 10:29:48.805985  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 10:29:48.805995  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 10:29:48.806044  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:29:48.806077  442711 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:29:48.806086  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:29:48.806114  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:29:48.806138  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:29:48.806159  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:29:48.806200  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:48.806227  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.806241  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.806251  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 10:29:48.806775  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:29:48.828836  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:29:48.852485  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:29:48.869671  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:29:48.888399  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 10:29:48.909534  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:29:48.929855  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:29:48.947605  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:29:48.966763  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:29:48.988016  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:29:49.007829  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:29:49.026079  442711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:29:49.039360  442711 ssh_runner.go:195] Run: openssl version
	I1101 10:29:49.046656  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:29:49.055916  442711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:49.060301  442711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:49.060365  442711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:49.102342  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:29:49.111430  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:29:49.119511  442711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:29:49.123636  442711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:29:49.123716  442711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:29:49.187161  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:29:49.197534  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:29:49.213678  442711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.223199  442711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.223260  442711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.278920  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:29:49.288139  442711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:29:49.293165  442711 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:29:49.293229  442711 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-854151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-854151 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:49.293304  442711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:29:49.293361  442711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:29:49.332016  442711 cri.go:89] found id: ""
	I1101 10:29:49.332085  442711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:29:49.341906  442711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:29:49.349886  442711 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:29:49.349957  442711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:29:49.362219  442711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:29:49.362239  442711 kubeadm.go:158] found existing configuration files:
	
	I1101 10:29:49.362291  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:29:49.377004  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:29:49.377067  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:29:49.387264  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:29:49.409584  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:29:49.409646  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:29:49.429348  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:29:49.449055  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:29:49.449121  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:29:49.466844  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:29:49.491371  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:29:49.491445  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:29:49.506874  442711 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:29:49.574695  442711 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:29:49.574864  442711 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:29:49.618510  442711 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:29:49.618591  442711 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:29:49.618633  442711 kubeadm.go:319] OS: Linux
	I1101 10:29:49.618685  442711 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:29:49.618739  442711 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:29:49.618794  442711 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:29:49.618850  442711 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:29:49.618904  442711 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:29:49.618974  442711 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:29:49.619027  442711 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:29:49.619082  442711 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:29:49.619133  442711 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:29:49.726271  442711 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:29:49.726414  442711 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:29:49.726537  442711 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:29:49.739233  442711 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:29:49.360802  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:29:49.566814  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:29:49.714500  443746 kubeadm.go:401] StartCluster: {Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:49.714647  443746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:29:49.714878  443746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:29:49.790509  443746 cri.go:89] found id: "4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a"
	I1101 10:29:49.790527  443746 cri.go:89] found id: "3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84"
	I1101 10:29:49.790532  443746 cri.go:89] found id: "6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a"
	I1101 10:29:49.790535  443746 cri.go:89] found id: "c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6"
	I1101 10:29:49.790538  443746 cri.go:89] found id: "87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed"
	I1101 10:29:49.790542  443746 cri.go:89] found id: "d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4"
	I1101 10:29:49.790545  443746 cri.go:89] found id: "b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	I1101 10:29:49.790548  443746 cri.go:89] found id: "99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362"
	I1101 10:29:49.790551  443746 cri.go:89] found id: "da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2"
	I1101 10:29:49.790558  443746 cri.go:89] found id: "6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31"
	I1101 10:29:49.790562  443746 cri.go:89] found id: "44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01"
	I1101 10:29:49.790565  443746 cri.go:89] found id: "7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447"
	I1101 10:29:49.790568  443746 cri.go:89] found id: "4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3"
	I1101 10:29:49.790571  443746 cri.go:89] found id: ""
	I1101 10:29:49.790767  443746 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:29:49.819428  443746 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:29:49Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:29:49.819527  443746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:29:49.851328  443746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:29:49.851455  443746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:29:49.851549  443746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:29:49.875148  443746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:29:49.875850  443746 kubeconfig.go:125] found "pause-197523" server: "https://192.168.76.2:8443"
	I1101 10:29:49.876597  443746 kapi.go:59] client config for pause-197523: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:29:49.877265  443746 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:29:49.877394  443746 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:29:49.877417  443746 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:29:49.877458  443746 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:29:49.877480  443746 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:29:49.877999  443746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:29:49.900062  443746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:29:49.900148  443746 kubeadm.go:602] duration metric: took 48.671161ms to restartPrimaryControlPlane
	I1101 10:29:49.900172  443746 kubeadm.go:403] duration metric: took 185.682293ms to StartCluster
	I1101 10:29:49.900215  443746 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:49.900314  443746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:29:49.901116  443746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:49.901435  443746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:29:49.901910  443746 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:29:49.901996  443746 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:49.905224  443746 out.go:179] * Verifying Kubernetes components...
	I1101 10:29:49.905308  443746 out.go:179] * Enabled addons: 
	I1101 10:29:49.745240  442711 out.go:252]   - Generating certificates and keys ...
	I1101 10:29:49.745354  442711 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:29:49.745438  442711 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:29:50.699952  442711 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:29:52.287258  442711 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:29:49.908080  443746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:49.908222  443746 addons.go:515] duration metric: took 6.30914ms for enable addons: enabled=[]
	I1101 10:29:50.251974  443746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:29:50.331535  443746 node_ready.go:35] waiting up to 6m0s for node "pause-197523" to be "Ready" ...
	I1101 10:29:52.578064  442711 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:29:52.906430  442711 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:29:52.996306  442711 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:29:52.996785  442711 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-854151 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:29:53.422703  442711 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:29:53.423071  442711 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-854151 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:29:54.577444  442711 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:29:54.821228  442711 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:29:55.278104  442711 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:29:55.278179  442711 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:29:55.539073  442711 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:29:56.377624  442711 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:29:57.746817  442711 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:29:58.321121  442711 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:29:58.960209  442711 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:29:58.961117  442711 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:29:58.964185  442711 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:29:56.021235  443746 node_ready.go:49] node "pause-197523" is "Ready"
	I1101 10:29:56.021315  443746 node_ready.go:38] duration metric: took 5.689748567s for node "pause-197523" to be "Ready" ...
	I1101 10:29:56.021343  443746 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:29:56.021433  443746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:29:56.051359  443746 api_server.go:72] duration metric: took 6.149856016s to wait for apiserver process to appear ...
	I1101 10:29:56.051436  443746 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:29:56.051471  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:56.148066  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:29:56.148147  443746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:29:56.551585  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:56.563849  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:29:56.563877  443746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:29:57.052035  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:57.062454  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:29:57.062487  443746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:29:57.552135  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:57.562571  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:29:57.566743  443746 api_server.go:141] control plane version: v1.34.1
	I1101 10:29:57.566773  443746 api_server.go:131] duration metric: took 1.515316746s to wait for apiserver health ...
	I1101 10:29:57.566783  443746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:29:57.583544  443746 system_pods.go:59] 7 kube-system pods found
	I1101 10:29:57.583591  443746 system_pods.go:61] "coredns-66bc5c9577-svwdl" [bbc67d74-e6c7-40ab-a5d7-6677d46431af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:29:57.583600  443746 system_pods.go:61] "etcd-pause-197523" [9e3f44e6-6d0a-4684-a5f9-a0d0ad8ad738] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:29:57.583607  443746 system_pods.go:61] "kindnet-jhdpd" [79caf352-bf51-4b51-b25b-b7a3daf6cd52] Running
	I1101 10:29:57.583615  443746 system_pods.go:61] "kube-apiserver-pause-197523" [c88a0cc5-db1c-4467-a711-53f1289ebe04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:29:57.583624  443746 system_pods.go:61] "kube-controller-manager-pause-197523" [30e49b56-a708-4899-956f-f16d86f3ad93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:29:57.583634  443746 system_pods.go:61] "kube-proxy-mwwgw" [728cdaf0-253c-46c6-83e3-5cb2e800e24f] Running
	I1101 10:29:57.583641  443746 system_pods.go:61] "kube-scheduler-pause-197523" [bbe10c2f-8f5d-4566-a431-7cb64304c2fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:29:57.583654  443746 system_pods.go:74] duration metric: took 16.86552ms to wait for pod list to return data ...
	I1101 10:29:57.583663  443746 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:29:57.588574  443746 default_sa.go:45] found service account: "default"
	I1101 10:29:57.588598  443746 default_sa.go:55] duration metric: took 4.924023ms for default service account to be created ...
	I1101 10:29:57.588607  443746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:29:57.595781  443746 system_pods.go:86] 7 kube-system pods found
	I1101 10:29:57.595818  443746 system_pods.go:89] "coredns-66bc5c9577-svwdl" [bbc67d74-e6c7-40ab-a5d7-6677d46431af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:29:57.595828  443746 system_pods.go:89] "etcd-pause-197523" [9e3f44e6-6d0a-4684-a5f9-a0d0ad8ad738] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:29:57.595834  443746 system_pods.go:89] "kindnet-jhdpd" [79caf352-bf51-4b51-b25b-b7a3daf6cd52] Running
	I1101 10:29:57.595841  443746 system_pods.go:89] "kube-apiserver-pause-197523" [c88a0cc5-db1c-4467-a711-53f1289ebe04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:29:57.595847  443746 system_pods.go:89] "kube-controller-manager-pause-197523" [30e49b56-a708-4899-956f-f16d86f3ad93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:29:57.595853  443746 system_pods.go:89] "kube-proxy-mwwgw" [728cdaf0-253c-46c6-83e3-5cb2e800e24f] Running
	I1101 10:29:57.595859  443746 system_pods.go:89] "kube-scheduler-pause-197523" [bbe10c2f-8f5d-4566-a431-7cb64304c2fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:29:57.595865  443746 system_pods.go:126] duration metric: took 7.25348ms to wait for k8s-apps to be running ...
	I1101 10:29:57.595880  443746 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:29:57.595932  443746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:29:57.650786  443746 system_svc.go:56] duration metric: took 54.895688ms WaitForService to wait for kubelet
	I1101 10:29:57.650866  443746 kubeadm.go:587] duration metric: took 7.749367254s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:29:57.650900  443746 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:29:57.657429  443746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:29:57.657510  443746 node_conditions.go:123] node cpu capacity is 2
	I1101 10:29:57.657537  443746 node_conditions.go:105] duration metric: took 6.614234ms to run NodePressure ...
	I1101 10:29:57.657561  443746 start.go:242] waiting for startup goroutines ...
	I1101 10:29:57.657598  443746 start.go:247] waiting for cluster config update ...
	I1101 10:29:57.657625  443746 start.go:256] writing updated cluster config ...
	I1101 10:29:57.658038  443746 ssh_runner.go:195] Run: rm -f paused
	I1101 10:29:57.662461  443746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:29:57.663113  443746 kapi.go:59] client config for pause-197523: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:29:57.667140  443746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-svwdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:29:58.967695  442711 out.go:252]   - Booting up control plane ...
	I1101 10:29:58.967802  442711 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:29:58.968172  442711 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:29:58.969485  442711 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:29:58.987860  442711 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:29:58.987972  442711 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:29:58.995914  442711 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:29:58.996018  442711 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:29:58.996358  442711 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:29:59.204438  442711 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:29:59.204563  442711 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:30:00.220316  442711 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003408899s
	I1101 10:30:00.220433  442711 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:30:00.220519  442711 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:30:00.220613  442711 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:30:00.220695  442711 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 10:29:59.676559  443746 pod_ready.go:104] pod "coredns-66bc5c9577-svwdl" is not "Ready", error: <nil>
	I1101 10:30:01.673247  443746 pod_ready.go:94] pod "coredns-66bc5c9577-svwdl" is "Ready"
	I1101 10:30:01.673289  443746 pod_ready.go:86] duration metric: took 4.006079328s for pod "coredns-66bc5c9577-svwdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:01.675791  443746 pod_ready.go:83] waiting for pod "etcd-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.181815  443746 pod_ready.go:94] pod "etcd-pause-197523" is "Ready"
	I1101 10:30:03.181844  443746 pod_ready.go:86] duration metric: took 1.506023853s for pod "etcd-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.184809  443746 pod_ready.go:83] waiting for pod "kube-apiserver-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.189765  443746 pod_ready.go:94] pod "kube-apiserver-pause-197523" is "Ready"
	I1101 10:30:03.189802  443746 pod_ready.go:86] duration metric: took 4.966166ms for pod "kube-apiserver-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.192204  443746 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:30:05.200698  443746 pod_ready.go:104] pod "kube-controller-manager-pause-197523" is not "Ready", error: <nil>
	I1101 10:30:06.197504  443746 pod_ready.go:94] pod "kube-controller-manager-pause-197523" is "Ready"
	I1101 10:30:06.197541  443746 pod_ready.go:86] duration metric: took 3.005312449s for pod "kube-controller-manager-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.199795  443746 pod_ready.go:83] waiting for pod "kube-proxy-mwwgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.205403  443746 pod_ready.go:94] pod "kube-proxy-mwwgw" is "Ready"
	I1101 10:30:06.205433  443746 pod_ready.go:86] duration metric: took 5.61035ms for pod "kube-proxy-mwwgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.271118  443746 pod_ready.go:83] waiting for pod "kube-scheduler-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.670509  443746 pod_ready.go:94] pod "kube-scheduler-pause-197523" is "Ready"
	I1101 10:30:06.670536  443746 pod_ready.go:86] duration metric: took 399.387794ms for pod "kube-scheduler-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.670549  443746 pod_ready.go:40] duration metric: took 9.008005361s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:30:06.740500  443746 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:30:06.743628  443746 out.go:179] * Done! kubectl is now configured to use "pause-197523" cluster and "default" namespace by default
	I1101 10:30:03.638128  442711 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.423881053s
	I1101 10:30:05.208802  442711 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.996964171s
	I1101 10:30:06.717926  442711 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.504658869s
	I1101 10:30:06.743244  442711 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:30:06.806947  442711 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:30:06.827529  442711 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:30:06.827743  442711 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-flag-854151 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:30:06.853335  442711 kubeadm.go:319] [bootstrap-token] Using token: ayqqn8.oxxi48m2zksj08s4
	I1101 10:30:06.856364  442711 out.go:252]   - Configuring RBAC rules ...
	I1101 10:30:06.856489  442711 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:30:06.869921  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:30:06.884700  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:30:06.890976  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:30:06.897585  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:30:06.905221  442711 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:30:07.129058  442711 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:30:07.653376  442711 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:30:08.129078  442711 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:30:08.130671  442711 kubeadm.go:319] 
	I1101 10:30:08.130759  442711 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:30:08.130772  442711 kubeadm.go:319] 
	I1101 10:30:08.130854  442711 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:30:08.130864  442711 kubeadm.go:319] 
	I1101 10:30:08.130891  442711 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:30:08.130957  442711 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:30:08.131013  442711 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:30:08.131022  442711 kubeadm.go:319] 
	I1101 10:30:08.131079  442711 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:30:08.131088  442711 kubeadm.go:319] 
	I1101 10:30:08.131139  442711 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:30:08.131148  442711 kubeadm.go:319] 
	I1101 10:30:08.131203  442711 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:30:08.131296  442711 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:30:08.131375  442711 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:30:08.131386  442711 kubeadm.go:319] 
	I1101 10:30:08.131475  442711 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:30:08.131559  442711 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:30:08.131567  442711 kubeadm.go:319] 
	I1101 10:30:08.131655  442711 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ayqqn8.oxxi48m2zksj08s4 \
	I1101 10:30:08.131766  442711 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:30:08.131794  442711 kubeadm.go:319] 	--control-plane 
	I1101 10:30:08.131803  442711 kubeadm.go:319] 
	I1101 10:30:08.131892  442711 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:30:08.131900  442711 kubeadm.go:319] 
	I1101 10:30:08.131986  442711 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ayqqn8.oxxi48m2zksj08s4 \
	I1101 10:30:08.132096  442711 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:30:08.137278  442711 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:30:08.137531  442711 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:30:08.137649  442711 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:30:08.137673  442711 cni.go:84] Creating CNI manager for ""
	I1101 10:30:08.137680  442711 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:30:08.140660  442711 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:30:08.143548  442711 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:30:08.151132  442711 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:30:08.151151  442711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:30:08.171176  442711 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:30:08.491548  442711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:30:08.491707  442711 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-854151 minikube.k8s.io/updated_at=2025_11_01T10_30_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=force-systemd-flag-854151 minikube.k8s.io/primary=true
	I1101 10:30:08.491715  442711 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:30:08.691156  442711 kubeadm.go:1114] duration metric: took 199.511802ms to wait for elevateKubeSystemPrivileges
	I1101 10:30:08.691216  442711 ops.go:34] apiserver oom_adj: -16
	I1101 10:30:08.691225  442711 kubeadm.go:403] duration metric: took 19.398001705s to StartCluster
	I1101 10:30:08.691241  442711 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:30:08.691322  442711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:30:08.692286  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:30:08.692527  442711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:30:08.692541  442711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:30:08.692821  442711 config.go:182] Loaded profile config "force-systemd-flag-854151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:30:08.692866  442711 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:30:08.692942  442711 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-flag-854151"
	I1101 10:30:08.692957  442711 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-flag-854151"
	I1101 10:30:08.692963  442711 addons.go:70] Setting default-storageclass=true in profile "force-systemd-flag-854151"
	I1101 10:30:08.692980  442711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-854151"
	I1101 10:30:08.692984  442711 host.go:66] Checking if "force-systemd-flag-854151" exists ...
	I1101 10:30:08.693408  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:30:08.693468  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:30:08.698563  442711 out.go:179] * Verifying Kubernetes components...
	I1101 10:30:08.701543  442711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:30:08.726322  442711 kapi.go:59] client config for force-systemd-flag-854151: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:30:08.726861  442711 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:30:08.726875  442711 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:30:08.726881  442711 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:30:08.726886  442711 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:30:08.726890  442711 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:30:08.726941  442711 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 10:30:08.727256  442711 addons.go:239] Setting addon default-storageclass=true in "force-systemd-flag-854151"
	I1101 10:30:08.727283  442711 host.go:66] Checking if "force-systemd-flag-854151" exists ...
	I1101 10:30:08.727719  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:30:08.747477  442711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:30:08.751301  442711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:30:08.751327  442711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:30:08.751407  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:30:08.769328  442711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:30:08.769354  442711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:30:08.769415  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:30:08.789845  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:30:08.803442  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:30:09.080797  442711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:30:09.164419  442711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:30:09.171333  442711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:30:09.172813  442711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:30:09.922830  442711 kapi.go:59] client config for force-systemd-flag-854151: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:30:09.923138  442711 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:30:09.923191  442711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:30:09.923301  442711 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:30:09.923858  442711 kapi.go:59] client config for force-systemd-flag-854151: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:30:09.945098  442711 api_server.go:72] duration metric: took 1.252541531s to wait for apiserver process to appear ...
	I1101 10:30:09.945131  442711 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:30:09.945150  442711 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:30:09.982736  442711 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:30:09.987040  442711 api_server.go:141] control plane version: v1.34.1
	I1101 10:30:09.987072  442711 api_server.go:131] duration metric: took 41.933519ms to wait for apiserver health ...
	I1101 10:30:09.987089  442711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:30:09.992601  442711 system_pods.go:59] 5 kube-system pods found
	I1101 10:30:09.992640  442711 system_pods.go:61] "etcd-force-systemd-flag-854151" [330b23ef-a7bc-4bb3-ba65-bce979dd6bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:30:09.992649  442711 system_pods.go:61] "kube-apiserver-force-systemd-flag-854151" [693e44ff-6368-44fe-9d06-4f678d8ce783] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:30:09.992660  442711 system_pods.go:61] "kube-controller-manager-force-systemd-flag-854151" [b25df811-25d6-4016-b326-5bfd3bd01721] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:30:09.992667  442711 system_pods.go:61] "kube-scheduler-force-systemd-flag-854151" [180a9d7b-4f5b-483e-b2af-6987757486ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:30:09.992672  442711 system_pods.go:61] "storage-provisioner" [13bc52d4-0358-453e-aa75-b9b88936626b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:30:09.992683  442711 system_pods.go:74] duration metric: took 5.589033ms to wait for pod list to return data ...
	I1101 10:30:09.992699  442711 kubeadm.go:587] duration metric: took 1.300149253s to wait for: map[apiserver:true system_pods:true]
	I1101 10:30:09.992716  442711 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:30:09.995785  442711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
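
A note on the healthz polling above: api_server.go probes the endpoint roughly every 500ms (see the 10:29:56.051, 56.551, 57.052 and 57.552 checks) and treats the [-] poststarthook failures in a 500 response as "not ready yet", retrying until it gets a 200. The same probe can be reproduced by hand; this is only a sketch, assuming the client certificate paths shown in the kapi.go line for pause-197523 earlier in this log:

	# hypothetical manual re-check of the pause-197523 apiserver health endpoint
	curl --cacert /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.crt \
	     --key /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key \
	     "https://192.168.76.2:8443/healthz?verbose"

Adding ?verbose makes the per-check [+]/[-] lines appear even when the overall status is 200.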
	
	
	==> CRI-O <==
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.510014719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.510851587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.525023543Z" level=info msg="Starting container: 6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a" id=24e1d850-2dd2-483d-ba90-7ecf4caf0f3e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.565625747Z" level=info msg="Started container" PID=2316 containerID=6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a description=kube-system/kindnet-jhdpd/kindnet-cni id=24e1d850-2dd2-483d-ba90-7ecf4caf0f3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c29766a43ee888fa895df77af04bd1ed5540c3ce52e0c3aa29dfc46e380800e
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.664430076Z" level=info msg="Created container 3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84: kube-system/kube-controller-manager-pause-197523/kube-controller-manager" id=949bd11a-8dee-4621-a96e-7ef4a987674e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.666212761Z" level=info msg="Starting container: 3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84" id=f23c6ce3-2ed8-42bb-8abf-caf36a987655 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.68857146Z" level=info msg="Started container" PID=2350 containerID=3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84 description=kube-system/kube-controller-manager-pause-197523/kube-controller-manager id=f23c6ce3-2ed8-42bb-8abf-caf36a987655 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fd946064ed5a1c83cbca5eb8fd69f1bca2ebceb23d35dbd8c58e591dd73560e
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.716867962Z" level=info msg="Created container 4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a: kube-system/coredns-66bc5c9577-svwdl/coredns" id=ed0eb5d3-f6ab-48b5-a19a-0e07ea743b9b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.72254626Z" level=info msg="Starting container: 4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a" id=e851a934-8c61-4290-8f14-3bcae5d8fddf name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.730073359Z" level=info msg="Started container" PID=2358 containerID=4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a description=kube-system/coredns-66bc5c9577-svwdl/coredns id=e851a934-8c61-4290-8f14-3bcae5d8fddf name=/runtime.v1.RuntimeService/StartContainer sandboxID=34694322e55facff1146ffd185dc11f071d5a82424e041b50f9045fdb95c8009
	Nov 01 10:29:50 pause-197523 crio[2072]: time="2025-11-01T10:29:50.512441407Z" level=info msg="Created container b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7: kube-system/kube-proxy-mwwgw/kube-proxy" id=9e2cb423-ebf6-4904-81e2-8ded1da80323 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:29:50 pause-197523 crio[2072]: time="2025-11-01T10:29:50.514894213Z" level=info msg="Starting container: b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7" id=2125f4e1-14a0-4769-a246-dfd3d6c46a38 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:50 pause-197523 crio[2072]: time="2025-11-01T10:29:50.520172004Z" level=info msg="Started container" PID=2334 containerID=b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7 description=kube-system/kube-proxy-mwwgw/kube-proxy id=2125f4e1-14a0-4769-a246-dfd3d6c46a38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f159266330daa9538d84151e49286bc7c50804c4dff500244c952c1e0fa9975
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.077900207Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.154895128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.155115906Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.155218775Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.219352296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.219550559Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.219664816Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.297992094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.29818673Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.298305714Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.302747435Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.302799513Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4b464843f33d1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   34694322e55fa       coredns-66bc5c9577-svwdl               kube-system
	3c3fa591e90f0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   1fd946064ed5a       kube-controller-manager-pause-197523   kube-system
	b85d566999f00       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   2f159266330da       kube-proxy-mwwgw                       kube-system
	6f72b51f09b07       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   7c29766a43ee8       kindnet-jhdpd                          kube-system
	c46b8aaeffa00       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   ff7d8c3bf49da       kube-scheduler-pause-197523            kube-system
	87b9897087e6a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   bfac7e8318e0c       etcd-pause-197523                      kube-system
	d28a5938aa109       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   779254e2c3d00       kube-apiserver-pause-197523            kube-system
	b76464b1416c8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   34694322e55fa       coredns-66bc5c9577-svwdl               kube-system
	99e565cbd3b72       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   2f159266330da       kube-proxy-mwwgw                       kube-system
	da788d7cea8ef       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   7c29766a43ee8       kindnet-jhdpd                          kube-system
	6c5a2fe54c508       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bfac7e8318e0c       etcd-pause-197523                      kube-system
	44db24a24cd97       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   779254e2c3d00       kube-apiserver-pause-197523            kube-system
	7149d740a3610       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   ff7d8c3bf49da       kube-scheduler-pause-197523            kube-system
	4742f77b740db       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1fd946064ed5a       kube-controller-manager-pause-197523   kube-system
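
The listing above matches what crictl reports for all containers (running and exited) on the node; a minimal sketch of regenerating it, assuming the standard CRI-O socket path and node access via minikube ssh:

	# hypothetical re-run from inside the node, e.g. after 'minikube -p pause-197523 ssh'
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

The Exited rows (ATTEMPT 0, created about a minute ago) appear to be the containers from the cluster's first start, and the Running rows (ATTEMPT 1, 21 seconds ago) their replacements created around 10:29:49 per the CRI-O events above.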
	
	
	==> coredns [4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59001 - 21446 "HINFO IN 2723435776611723228.7685608349230344559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003934867s
	
	
	==> coredns [b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54012 - 23652 "HINFO IN 5500107722844301064.582281186405622169. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023656971s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-197523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-197523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=pause-197523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_28_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-197523
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:30:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:28:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:28:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:28:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:29:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-197523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8c3055d4-6ef2-4330-b24a-ecab648c0a33
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-svwdl                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-197523                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-jhdpd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-197523             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-197523    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-mwwgw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-197523             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-197523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-197523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-197523 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-197523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-197523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-197523 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-197523 event: Registered Node pause-197523 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-197523 status is now: NodeReady
	  Warning  ContainerGCFailed        24s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           12s                node-controller  Node pause-197523 event: Registered Node pause-197523 in Controller
	
	
	==> dmesg <==
	[  +4.195210] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:58] overlayfs: idmapped layers are currently not supported
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31] <==
	{"level":"warn","ts":"2025-11-01T10:28:43.201055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.219836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.245909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.270057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.290499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.357552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.411876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:29:40.718657Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:29:40.718711Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-197523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:29:40.718799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:29:40.992617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:29:40.994086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994129Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994177Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:29:40.994186Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:29:40.994164Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-01T10:29:40.994230Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:29:40.994253Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994319Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994359Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:29:40.994394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:29:40.997479Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-01T10:29:40.997549Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:29:40.997622Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:29:40.997668Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-197523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed] <==
	{"level":"warn","ts":"2025-11-01T10:29:52.955280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.013820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.054675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.098226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.134106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.181659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.266947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.305105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.405865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.446856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.494584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.550145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.588918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.628419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.675053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.757783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.847925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.915995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:54.033957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:54.086545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:54.125941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56172","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:29:57.106330Z","caller":"traceutil/trace.go:172","msg":"trace[1639363615] linearizableReadLoop","detail":"{readStateIndex:543; appliedIndex:543; }","duration":"121.856722ms","start":"2025-11-01T10:29:56.984457Z","end":"2025-11-01T10:29:57.106314Z","steps":["trace[1639363615] 'read index received'  (duration: 121.839089ms)","trace[1639363615] 'applied index is now lower than readState.Index'  (duration: 17.108µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:29:57.106500Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.024396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-scheduler\" limit:1 ","response":"range_response_count:1 size:1835"}
	{"level":"info","ts":"2025-11-01T10:29:57.106544Z","caller":"traceutil/trace.go:172","msg":"trace[32956699] range","detail":"{range_begin:/registry/clusterroles/system:kube-scheduler; range_end:; response_count:1; response_revision:519; }","duration":"122.074382ms","start":"2025-11-01T10:29:56.984453Z","end":"2025-11-01T10:29:57.106527Z","steps":["trace[32956699] 'agreement among raft nodes before linearized reading'  (duration: 121.937428ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:29:57.119612Z","caller":"traceutil/trace.go:172","msg":"trace[770968399] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"135.435785ms","start":"2025-11-01T10:29:56.984159Z","end":"2025-11-01T10:29:57.119595Z","steps":["trace[770968399] 'process raft request'  (duration: 122.554774ms)","trace[770968399] 'compare'  (duration: 12.773457ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:30:11 up  2:12,  0 user,  load average: 4.77, 3.66, 2.57
	Linux pause-197523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a] <==
	I1101 10:29:49.754074       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:29:49.754346       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:29:49.754505       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:29:49.754518       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:29:49.754533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:29:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:29:50.065156       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:29:50.089757       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:29:50.089869       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:29:50.095106       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:29:56.194240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:29:56.194280       1 metrics.go:72] Registering metrics
	I1101 10:29:56.194349       1 controller.go:711] "Syncing nftables rules"
	I1101 10:30:00.077363       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:30:00.077521       1 main.go:301] handling current node
	I1101 10:30:10.064724       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:30:10.064773       1 main.go:301] handling current node
	
	
	==> kindnet [da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2] <==
	I1101 10:28:55.221903       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:28:55.222322       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:28:55.222505       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:28:55.222559       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:28:55.222600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:28:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:28:55.421385       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:28:55.421457       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:28:55.421489       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:28:55.422440       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:29:25.421716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:29:25.423862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:29:25.424216       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:29:25.424413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:29:26.822446       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:29:26.822489       1 metrics.go:72] Registering metrics
	I1101 10:29:26.822554       1 controller.go:711] "Syncing nftables rules"
	I1101 10:29:35.427955       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:29:35.428011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01] <==
	W1101 10:29:40.738307       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738361       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738416       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738478       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738526       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.739753       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.739809       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.739954       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740015       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740057       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740121       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740160       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740195       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740234       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740274       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740312       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741061       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741120       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741179       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741241       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741292       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741347       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741446       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741548       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741645       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4] <==
	I1101 10:29:56.028888       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:29:56.028895       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:29:56.038343       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:29:56.038432       1 policy_source.go:240] refreshing policies
	I1101 10:29:56.052548       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:29:56.055187       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:29:56.075994       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:29:56.076133       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:29:56.076393       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:29:56.082548       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:29:56.103259       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:29:56.110527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:29:56.117098       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:29:56.119420       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:29:56.120837       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:29:56.120958       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:29:56.131942       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:29:56.132608       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1101 10:29:56.182455       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:29:56.652108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:29:57.783109       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:29:59.249123       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:29:59.276664       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:29:59.471476       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:29:59.573586       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84] <==
	I1101 10:29:59.257110       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:29:59.258629       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:29:59.265345       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:29:59.265558       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:29:59.265476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:29:59.265921       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:29:59.266005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:29:59.266400       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-197523"
	I1101 10:29:59.266539       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:29:59.269780       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:29:59.269865       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:29:59.269963       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:29:59.270663       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:29:59.272210       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:29:59.272664       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:29:59.273786       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:29:59.276871       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:29:59.279614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:29:59.279910       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:29:59.280076       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:29:59.280522       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:29:59.283286       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:29:59.285602       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:29:59.289631       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:29:59.294872       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3] <==
	I1101 10:28:52.528875       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:28:52.529090       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:28:52.536385       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-197523" podCIDRs=["10.244.0.0/24"]
	I1101 10:28:52.539501       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:28:52.539706       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:28:52.542971       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:28:52.543090       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:28:52.543102       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:28:52.543111       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:28:52.543120       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:28:52.543131       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:28:52.543167       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:28:52.549784       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:28:52.550002       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:28:52.569838       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:28:52.569911       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:28:52.569942       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:28:52.582382       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:28:52.592776       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:28:52.593078       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:28:52.593162       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:28:52.593268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-197523"
	I1101 10:28:52.593752       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:28:52.618555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:29:37.601403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362] <==
	I1101 10:28:55.256002       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:28:55.339720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:28:55.442579       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:28:55.442679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:28:55.442798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:28:55.463818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:28:55.463957       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:28:55.471808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:28:55.472328       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:28:55.472523       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:28:55.474068       1 config.go:200] "Starting service config controller"
	I1101 10:28:55.474118       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:28:55.474159       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:28:55.474186       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:28:55.474221       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:28:55.474247       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:28:55.477062       1 config.go:309] "Starting node config controller"
	I1101 10:28:55.477130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:28:55.477160       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:28:55.574263       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:28:55.574410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:28:55.574431       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7] <==
	I1101 10:29:51.309868       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:29:52.528481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:29:56.160313       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:29:56.167682       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:29:56.167810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:29:58.109910       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:29:58.133870       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:29:58.297828       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:29:58.298246       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:29:58.309979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:29:58.311404       1 config.go:200] "Starting service config controller"
	I1101 10:29:58.325515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:29:58.325576       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:29:58.325582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:29:58.325596       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:29:58.325600       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:29:58.362151       1 config.go:309] "Starting node config controller"
	I1101 10:29:58.362230       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:29:58.362261       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:29:58.426833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:29:58.429744       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:29:58.429814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447] <==
	E1101 10:28:44.640587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:28:44.640701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:28:45.489659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:28:45.498402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:28:45.580442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:28:45.636789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:28:45.683265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:28:45.703840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:28:45.723393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:28:45.750070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:28:45.776323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:28:45.804610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:28:45.829492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:28:45.898114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:28:45.950592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:28:45.964864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:28:45.969512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:28:46.140437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:28:48.869760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:40.725788       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:29:40.725822       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:29:40.725844       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:29:40.725882       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:40.726142       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:29:40.726159       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6] <==
	I1101 10:29:55.201007       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:29:59.016264       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:29:59.016423       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:29:59.023387       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:29:59.023845       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:29:59.023910       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:29:59.023964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:29:59.030635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:59.030727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:59.030783       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:29:59.030816       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:29:59.125792       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:29:59.133830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:59.134732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236150    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jhdpd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="79caf352-bf51-4b51-b25b-b7a3daf6cd52" pod="kube-system/kindnet-jhdpd"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236389    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mwwgw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="728cdaf0-253c-46c6-83e3-5cb2e800e24f" pod="kube-system/kube-proxy-mwwgw"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236625    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f473bb29a7c49ae0b00b136ba9170d53" pod="kube-system/kube-controller-manager-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236868    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7a182863f84e1dad627e81cdc2134cb1" pod="kube-system/kube-scheduler-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: I1101 10:29:49.262296    1313 scope.go:117] "RemoveContainer" containerID="b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.262845    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mwwgw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="728cdaf0-253c-46c6-83e3-5cb2e800e24f" pod="kube-system/kube-proxy-mwwgw"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263022    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-svwdl\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbc67d74-e6c7-40ab-a5d7-6677d46431af" pod="kube-system/coredns-66bc5c9577-svwdl"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263173    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f473bb29a7c49ae0b00b136ba9170d53" pod="kube-system/kube-controller-manager-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263312    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7a182863f84e1dad627e81cdc2134cb1" pod="kube-system/kube-scheduler-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263488    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e080f67ff1fb03855bd1c1d221919660" pod="kube-system/etcd-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263625    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="882dec79a7d4eb821a4eee699c3f2bb4" pod="kube-system/kube-apiserver-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263758    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jhdpd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="79caf352-bf51-4b51-b25b-b7a3daf6cd52" pod="kube-system/kindnet-jhdpd"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708380    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-svwdl\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="bbc67d74-e6c7-40ab-a5d7-6677d46431af" pod="kube-system/coredns-66bc5c9577-svwdl"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708548    1313 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-197523\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708567    1313 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-197523\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708775    1313 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-197523\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.746479    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="f473bb29a7c49ae0b00b136ba9170d53" pod="kube-system/kube-controller-manager-pause-197523"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.818148    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="7a182863f84e1dad627e81cdc2134cb1" pod="kube-system/kube-scheduler-pause-197523"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.956869    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="e080f67ff1fb03855bd1c1d221919660" pod="kube-system/etcd-pause-197523"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.974726    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="882dec79a7d4eb821a4eee699c3f2bb4" pod="kube-system/kube-apiserver-pause-197523"
	Nov 01 10:29:56 pause-197523 kubelet[1313]: E1101 10:29:56.000337    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-jhdpd\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="79caf352-bf51-4b51-b25b-b7a3daf6cd52" pod="kube-system/kindnet-jhdpd"
	Nov 01 10:29:56 pause-197523 kubelet[1313]: E1101 10:29:56.047135    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-mwwgw\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="728cdaf0-253c-46c6-83e3-5cb2e800e24f" pod="kube-system/kube-proxy-mwwgw"
	Nov 01 10:30:07 pause-197523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:30:07 pause-197523 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:30:07 pause-197523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
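Note: the kube-scheduler log above shows a burst of "Failed to watch ... is forbidden" errors for system:kube-scheduler while the apiserver was restarting; such RBAC denials are normally transient and clear once the caches sync (see the "Caches are synced" lines that follow them). If they persisted, a quick check of the scheduler's effective permissions could look roughly like the command below (illustrative only, not part of the recorded test run; requires impersonation rights in the cluster):

	kubectl --context pause-197523 auth can-i list pods --as=system:kube-scheduler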
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-197523 -n pause-197523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-197523 -n pause-197523: exit status 2 (373.394974ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
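Note: the status checks above use minikube's Go-template --format flag to read a single field of the profile status ({{.APIServer}} here, {{.Host}} further below). Assuming the same template mechanism, several fields can be read in one call (illustrative invocation, not part of the recorded test run):

	out/minikube-linux-arm64 status -p pause-197523 --format='{{.Host}} {{.APIServer}}'

The non-zero exit code appears to reflect the overall component state rather than the single templated field, which is presumably why the harness treats "exit status 2" as "may be ok".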
helpers_test.go:269: (dbg) Run:  kubectl --context pause-197523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-197523
helpers_test.go:243: (dbg) docker inspect pause-197523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107",
	        "Created": "2025-11-01T10:28:20.242203934Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 438917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:28:20.32920004Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/hostname",
	        "HostsPath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/hosts",
	        "LogPath": "/var/lib/docker/containers/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107/9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107-json.log",
	        "Name": "/pause-197523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-197523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-197523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9adb215252660e900ac6cb23336191e6b5aa0726c557d4c071ec9ab170aac107",
	                "LowerDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85a005fc744cdaba46ed7b46e843b12ca411e304702702e29321d1ef27c39608/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-197523",
	                "Source": "/var/lib/docker/volumes/pause-197523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-197523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-197523",
	                "name.minikube.sigs.k8s.io": "pause-197523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26224c3703e50552df403e8123027b5ce5cc80e7bebdddbac6e19889c12769fe",
	            "SandboxKey": "/var/run/docker/netns/26224c3703e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-197523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:28:59:05:1c:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f41f9d3ed4581b14e0c3dce3ce74200150d668805cc1e4da30ba9f5353e7a79e",
	                    "EndpointID": "2159e744649b89aa3ad921e037b9cf65ef10756ed2dfcb314e2b231ec57bdbbb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-197523",
	                        "9adb21525266"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
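Note: the full docker inspect document above can be narrowed with Docker's -f/--format Go-template flag, the same mechanism the harness itself uses later (e.g. --format={{.State.Status}}). To read just the fields relevant to a pause post-mortem, commands along these lines would work (illustrative, using fields present in the JSON above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-197523
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-197523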
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-197523 -n pause-197523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-197523 -n pause-197523: exit status 2 (401.697351ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-197523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-197523 logs -n 25: (1.742336172s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-843745                                                                                                                │ missing-upgrade-843745    │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:25 UTC │
	│ ssh     │ -p NoKubernetes-180480 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │                     │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:26 UTC │
	│ stop    │ -p NoKubernetes-180480                                                                                                                   │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p NoKubernetes-180480 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ ssh     │ -p NoKubernetes-180480 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │                     │
	│ delete  │ -p NoKubernetes-180480                                                                                                                   │ NoKubernetes-180480       │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p stopped-upgrade-261821 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-261821    │ jenkins │ v1.32.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ stop    │ -p kubernetes-upgrade-683031                                                                                                             │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:28 UTC │
	│ stop    │ stopped-upgrade-261821 stop                                                                                                              │ stopped-upgrade-261821    │ jenkins │ v1.32.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	│ start   │ -p stopped-upgrade-261821 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-261821    │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:27 UTC │
	│ delete  │ -p stopped-upgrade-261821                                                                                                                │ stopped-upgrade-261821    │ jenkins │ v1.37.0 │ 01 Nov 25 10:27 UTC │ 01 Nov 25 10:27 UTC │
	│ start   │ -p running-upgrade-645343 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-645343    │ jenkins │ v1.32.0 │ 01 Nov 25 10:27 UTC │ 01 Nov 25 10:27 UTC │
	│ start   │ -p running-upgrade-645343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-645343    │ jenkins │ v1.37.0 │ 01 Nov 25 10:27 UTC │ 01 Nov 25 10:28 UTC │
	│ delete  │ -p running-upgrade-645343                                                                                                                │ running-upgrade-645343    │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │ 01 Nov 25 10:28 UTC │
	│ start   │ -p pause-197523 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-197523              │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │ 01 Nov 25 10:29 UTC │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │                     │
	│ start   │ -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:28 UTC │ 01 Nov 25 10:29 UTC │
	│ delete  │ -p kubernetes-upgrade-683031                                                                                                             │ kubernetes-upgrade-683031 │ jenkins │ v1.37.0 │ 01 Nov 25 10:29 UTC │ 01 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-flag-854151 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-854151 │ jenkins │ v1.37.0 │ 01 Nov 25 10:29 UTC │ 01 Nov 25 10:30 UTC │
	│ start   │ -p pause-197523 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-197523              │ jenkins │ v1.37.0 │ 01 Nov 25 10:29 UTC │ 01 Nov 25 10:30 UTC │
	│ pause   │ -p pause-197523 --alsologtostderr -v=5                                                                                                   │ pause-197523              │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ force-systemd-flag-854151 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                     │ force-systemd-flag-854151 │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-flag-854151                                                                                                             │ force-systemd-flag-854151 │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:29:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:29:39.295720  443746 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:29:39.296036  443746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:39.296067  443746 out.go:374] Setting ErrFile to fd 2...
	I1101 10:29:39.296087  443746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:39.296391  443746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:29:39.296850  443746 out.go:368] Setting JSON to false
	I1101 10:29:39.298019  443746 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7929,"bootTime":1761985051,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:29:39.298116  443746 start.go:143] virtualization:  
	I1101 10:29:39.303285  443746 out.go:179] * [pause-197523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:29:39.307246  443746 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:29:39.307310  443746 notify.go:221] Checking for updates...
	I1101 10:29:39.314724  443746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:29:39.317237  443746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:29:39.320231  443746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:29:39.323195  443746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:29:39.326179  443746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:29:39.329561  443746 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:39.330173  443746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:29:39.372017  443746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:29:39.372126  443746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:29:39.477474  443746 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:64 SystemTime:2025-11-01 10:29:39.467260743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:29:39.477581  443746 docker.go:319] overlay module found
	I1101 10:29:39.480883  443746 out.go:179] * Using the docker driver based on existing profile
	I1101 10:29:39.483761  443746 start.go:309] selected driver: docker
	I1101 10:29:39.483782  443746 start.go:930] validating driver "docker" against &{Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:39.483920  443746 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:29:39.484028  443746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:29:39.539079  443746 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:64 SystemTime:2025-11-01 10:29:39.529987551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:29:39.539491  443746 cni.go:84] Creating CNI manager for ""
	I1101 10:29:39.539561  443746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:29:39.539609  443746 start.go:353] cluster config:
	{Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:39.542765  443746 out.go:179] * Starting "pause-197523" primary control-plane node in "pause-197523" cluster
	I1101 10:29:39.545612  443746 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:29:39.548663  443746 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:29:39.551572  443746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:29:39.551638  443746 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:29:39.551648  443746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:29:39.551653  443746 cache.go:59] Caching tarball of preloaded images
	I1101 10:29:39.552023  443746 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:29:39.552034  443746 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:29:39.552167  443746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/config.json ...
	I1101 10:29:39.571820  443746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:29:39.571846  443746 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:29:39.571867  443746 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:29:39.571893  443746 start.go:360] acquireMachinesLock for pause-197523: {Name:mk6d808ea7a56f48373318480031d6f0811b7ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:29:39.571955  443746 start.go:364] duration metric: took 37.777µs to acquireMachinesLock for "pause-197523"
	I1101 10:29:39.571979  443746 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:29:39.571985  443746 fix.go:54] fixHost starting: 
	I1101 10:29:39.572275  443746 cli_runner.go:164] Run: docker container inspect pause-197523 --format={{.State.Status}}
	I1101 10:29:39.588810  443746 fix.go:112] recreateIfNeeded on pause-197523: state=Running err=<nil>
	W1101 10:29:39.588847  443746 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:29:37.916677  442711 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-854151:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.455129833s)
	I1101 10:29:37.916707  442711 kic.go:203] duration metric: took 4.455279184s to extract preloaded images to volume ...
	W1101 10:29:37.916853  442711 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:29:37.916968  442711 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:29:37.975944  442711 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-854151 --name force-systemd-flag-854151 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-854151 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-854151 --network force-systemd-flag-854151 --ip 192.168.85.2 --volume force-systemd-flag-854151:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:29:38.291651  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Running}}
	I1101 10:29:38.318359  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:29:38.338802  442711 cli_runner.go:164] Run: docker exec force-systemd-flag-854151 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:29:38.400320  442711 oci.go:144] the created container "force-systemd-flag-854151" has a running status.
	I1101 10:29:38.400348  442711 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa...
	I1101 10:29:38.546138  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 10:29:38.546197  442711 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:29:38.576195  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:29:38.597863  442711 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:29:38.597887  442711 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-854151 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:29:38.663366  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:29:38.681001  442711 machine.go:94] provisionDockerMachine start ...
	I1101 10:29:38.681096  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:38.711402  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:38.711746  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:38.711755  442711 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:29:38.712536  442711 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39260->127.0.0.1:33389: read: connection reset by peer
	I1101 10:29:41.865391  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-854151
	
	I1101 10:29:41.865415  442711 ubuntu.go:182] provisioning hostname "force-systemd-flag-854151"
	I1101 10:29:41.865486  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:41.882869  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:41.883186  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:41.883203  442711 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-854151 && echo "force-systemd-flag-854151" | sudo tee /etc/hostname
	I1101 10:29:42.049255  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-854151
	
	I1101 10:29:42.049337  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:42.073512  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:42.073891  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:42.073916  442711 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-854151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-854151/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-854151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:29:42.235094  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:29:42.235146  442711 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:29:42.235179  442711 ubuntu.go:190] setting up certificates
	I1101 10:29:42.235190  442711 provision.go:84] configureAuth start
	I1101 10:29:42.235266  442711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-854151
	I1101 10:29:42.254380  442711 provision.go:143] copyHostCerts
	I1101 10:29:42.254433  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:29:42.254485  442711 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:29:42.254499  442711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:29:42.254582  442711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:29:42.254674  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:29:42.254698  442711 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:29:42.254703  442711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:29:42.254733  442711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:29:42.254783  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:29:42.254804  442711 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:29:42.254808  442711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:29:42.254835  442711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:29:42.254894  442711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-854151 san=[127.0.0.1 192.168.85.2 force-systemd-flag-854151 localhost minikube]
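The server cert generated above is issued for the SANs listed at the end of that line (loopback, the node IP, the profile name, localhost, minikube). A minimal sketch, not part of the test run, for checking those SANs on the generated file using the path logged above:

    # Inspect the SANs baked into the freshly generated server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'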
	I1101 10:29:39.592263  443746 out.go:252] * Updating the running docker "pause-197523" container ...
	I1101 10:29:39.592298  443746 machine.go:94] provisionDockerMachine start ...
	I1101 10:29:39.592390  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:39.609364  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:39.609723  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:39.609738  443746 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:29:39.765116  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-197523
	
	I1101 10:29:39.765183  443746 ubuntu.go:182] provisioning hostname "pause-197523"
	I1101 10:29:39.765283  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:39.782611  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:39.782942  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:39.782958  443746 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-197523 && echo "pause-197523" | sudo tee /etc/hostname
	I1101 10:29:39.943373  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-197523
	
	I1101 10:29:39.943466  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:39.961999  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:39.962307  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:39.962327  443746 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-197523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-197523/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-197523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:29:40.118723  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:29:40.118791  443746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:29:40.118814  443746 ubuntu.go:190] setting up certificates
	I1101 10:29:40.118835  443746 provision.go:84] configureAuth start
	I1101 10:29:40.118896  443746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-197523
	I1101 10:29:40.137285  443746 provision.go:143] copyHostCerts
	I1101 10:29:40.137361  443746 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:29:40.137378  443746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:29:40.137458  443746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:29:40.137608  443746 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:29:40.137619  443746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:29:40.137650  443746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:29:40.137882  443746 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:29:40.137897  443746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:29:40.137929  443746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:29:40.137990  443746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.pause-197523 san=[127.0.0.1 192.168.76.2 localhost minikube pause-197523]
	I1101 10:29:40.351844  443746 provision.go:177] copyRemoteCerts
	I1101 10:29:40.351917  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:29:40.351962  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:40.371563  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:40.477431  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:29:40.494732  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:29:40.512178  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:29:40.530189  443746 provision.go:87] duration metric: took 411.339674ms to configureAuth
	I1101 10:29:40.530216  443746 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:29:40.530446  443746 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:40.530552  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:40.547402  443746 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:40.547724  443746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33384 <nil> <nil>}
	I1101 10:29:40.547744  443746 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:29:42.876451  442711 provision.go:177] copyRemoteCerts
	I1101 10:29:42.876528  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:29:42.876578  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:42.893293  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
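The provisioner reaches the kic container over the forwarded SSH port shown in this line. A sketch of reproducing that connection by hand, assuming the same ephemeral port (33389) and key path logged here:

    # Connect to the node the same way the provisioner does (values taken from the log line above).
    ssh -i /home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa \
        -p 33389 docker@127.0.0.1 'hostname'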
	I1101 10:29:43.002383  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 10:29:43.002480  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:29:43.022432  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 10:29:43.022495  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 10:29:43.041115  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 10:29:43.041178  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:29:43.060264  442711 provision.go:87] duration metric: took 825.053849ms to configureAuth
	I1101 10:29:43.060290  442711 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:29:43.060487  442711 config.go:182] Loaded profile config "force-systemd-flag-854151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:43.060611  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.078415  442711 main.go:143] libmachine: Using SSH client type: native
	I1101 10:29:43.078734  442711 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1101 10:29:43.078755  442711 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:29:43.339966  442711 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:29:43.339990  442711 machine.go:97] duration metric: took 4.658971657s to provisionDockerMachine
	I1101 10:29:43.340000  442711 client.go:176] duration metric: took 10.583806957s to LocalClient.Create
	I1101 10:29:43.340068  442711 start.go:167] duration metric: took 10.583870408s to libmachine.API.Create "force-systemd-flag-854151"
	I1101 10:29:43.340085  442711 start.go:293] postStartSetup for "force-systemd-flag-854151" (driver="docker")
	I1101 10:29:43.340109  442711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:29:43.340187  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:29:43.340265  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.357951  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.462589  442711 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:29:43.466232  442711 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:29:43.466263  442711 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:29:43.466275  442711 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:29:43.466329  442711 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:29:43.466410  442711 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:29:43.466428  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /etc/ssl/certs/2871352.pem
	I1101 10:29:43.466529  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:29:43.474125  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:43.491612  442711 start.go:296] duration metric: took 151.513037ms for postStartSetup
	I1101 10:29:43.492004  442711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-854151
	I1101 10:29:43.509227  442711 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/config.json ...
	I1101 10:29:43.509515  442711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:29:43.509573  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.526142  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.626944  442711 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:29:43.632151  442711 start.go:128] duration metric: took 10.87932789s to createHost
	I1101 10:29:43.632188  442711 start.go:83] releasing machines lock for "force-systemd-flag-854151", held for 10.879460264s
	I1101 10:29:43.632258  442711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-854151
	I1101 10:29:43.649477  442711 ssh_runner.go:195] Run: cat /version.json
	I1101 10:29:43.649499  442711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:29:43.649535  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.649554  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:29:43.669801  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.684194  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:29:43.870773  442711 ssh_runner.go:195] Run: systemctl --version
	I1101 10:29:43.877209  442711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:29:43.916414  442711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:29:43.920717  442711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:29:43.920835  442711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:29:43.949634  442711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
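The find/mv above parks any pre-existing bridge or podman CNI configs under a .mk_disabled suffix so only the CNI that minikube installs later (kindnet, per the "recommending kindnet" line below) is active. A sketch, for illustration only, of listing what was disabled and how it could be restored by stripping the suffix:

    # List the parked CNI configs, then restore them by removing the .mk_disabled suffix.
    ls /etc/cni/net.d/*.mk_disabled
    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done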
	I1101 10:29:43.949674  442711 start.go:496] detecting cgroup driver to use...
	I1101 10:29:43.949686  442711 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1101 10:29:43.949781  442711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:29:43.969121  442711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:29:43.982277  442711 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:29:43.982397  442711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:29:44.007675  442711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:29:44.029438  442711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:29:44.142715  442711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:29:44.269881  442711 docker.go:234] disabling docker service ...
	I1101 10:29:44.269952  442711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:29:44.291329  442711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:29:44.304908  442711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:29:44.426321  442711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:29:44.544052  442711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:29:44.558529  442711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:29:44.572257  442711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:29:44.572332  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.581674  442711 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:29:44.581878  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.591952  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.601379  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.611535  442711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:29:44.620044  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.629319  442711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.643538  442711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:44.652743  442711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:29:44.660463  442711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:29:44.668203  442711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:44.788361  442711 ssh_runner.go:195] Run: sudo systemctl restart crio
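The sequence of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A sketch of what those edits leave behind in the systemd cgroup-manager case; the expected values are reconstructed from the commands, not captured from the node:

    # Show the keys the provisioner just rewrote in the CRI-O drop-in.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Expected (reconstructed from the sed commands above):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",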
	I1101 10:29:44.912902  442711 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:29:44.913016  442711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:29:44.916922  442711 start.go:564] Will wait 60s for crictl version
	I1101 10:29:44.917030  442711 ssh_runner.go:195] Run: which crictl
	I1101 10:29:44.920582  442711 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:29:44.953266  442711 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:29:44.953408  442711 ssh_runner.go:195] Run: crio --version
	I1101 10:29:44.983480  442711 ssh_runner.go:195] Run: crio --version
	I1101 10:29:45.035436  442711 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:29:45.038843  442711 cli_runner.go:164] Run: docker network inspect force-systemd-flag-854151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:29:45.072973  442711 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:29:45.078518  442711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:29:45.096922  442711 kubeadm.go:884] updating cluster {Name:force-systemd-flag-854151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-854151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:29:45.097064  442711 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:29:45.097137  442711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:45.176826  442711 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:45.176861  442711 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:29:45.176934  442711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:45.210205  442711 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:45.210233  442711 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:29:45.210243  442711 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:29:45.210343  442711 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-854151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-854151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:29:45.210443  442711 ssh_runner.go:195] Run: crio config
	I1101 10:29:45.297523  442711 cni.go:84] Creating CNI manager for ""
	I1101 10:29:45.297559  442711 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:29:45.297583  442711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:29:45.297611  442711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-854151 NodeName:force-systemd-flag-854151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:29:45.297877  442711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-854151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:29:45.297963  442711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:29:45.310976  442711 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:29:45.311117  442711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:29:45.324828  442711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1101 10:29:45.346687  442711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:29:45.365935  442711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1101 10:29:45.382906  442711 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:29:45.387517  442711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
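Together with the host.minikube.internal entry written a few lines earlier, this one-liner leaves two minikube-internal names in the node's /etc/hosts. A sketch of checking them, with the expected entries reconstructed from the two commands for this profile rather than captured from the node:

    # Show the minikube-internal host entries added during provisioning.
    grep 'minikube.internal' /etc/hosts
    # Expected (reconstructed):
    #   192.168.85.1	host.minikube.internal
    #   192.168.85.2	control-plane.minikube.internal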
	I1101 10:29:45.402298  442711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:45.528628  442711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:29:45.546603  442711 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151 for IP: 192.168.85.2
	I1101 10:29:45.546667  442711 certs.go:195] generating shared ca certs ...
	I1101 10:29:45.546698  442711 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:45.546883  442711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:29:45.546963  442711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:29:45.547000  442711 certs.go:257] generating profile certs ...
	I1101 10:29:45.547102  442711 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key
	I1101 10:29:45.547135  442711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt with IP's: []
	I1101 10:29:46.415000  442711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt ...
	I1101 10:29:46.415090  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt: {Name:mkc1fd22bd54e1c2f89bde43293a11f96082f927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:46.415311  442711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key ...
	I1101 10:29:46.415358  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key: {Name:mkadb2d3b4c28a88bdcf3ae0e69b45b7c9bcb4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:46.415523  442711 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540
	I1101 10:29:46.415580  442711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:29:45.911365  443746 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:29:45.911392  443746 machine.go:97] duration metric: took 6.319082175s to provisionDockerMachine
	I1101 10:29:45.911403  443746 start.go:293] postStartSetup for "pause-197523" (driver="docker")
	I1101 10:29:45.911414  443746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:29:45.911489  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:29:45.911536  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:45.932890  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.039309  443746 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:29:46.043834  443746 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:29:46.043860  443746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:29:46.043871  443746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:29:46.043922  443746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:29:46.044003  443746 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:29:46.044104  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:29:46.053260  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:46.079847  443746 start.go:296] duration metric: took 168.427247ms for postStartSetup
	I1101 10:29:46.079977  443746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:29:46.080048  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:46.100120  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.207863  443746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:29:46.213630  443746 fix.go:56] duration metric: took 6.641637966s for fixHost
	I1101 10:29:46.213651  443746 start.go:83] releasing machines lock for "pause-197523", held for 6.641683202s
	I1101 10:29:46.213742  443746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-197523
	I1101 10:29:46.235414  443746 ssh_runner.go:195] Run: cat /version.json
	I1101 10:29:46.235488  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:46.235749  443746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:29:46.235807  443746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-197523
	I1101 10:29:46.263011  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.279810  443746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/pause-197523/id_rsa Username:docker}
	I1101 10:29:46.370106  443746 ssh_runner.go:195] Run: systemctl --version
	I1101 10:29:46.470997  443746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:29:46.549572  443746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:29:46.554936  443746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:29:46.555002  443746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:29:46.563922  443746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:29:46.563946  443746 start.go:496] detecting cgroup driver to use...
	I1101 10:29:46.563977  443746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:29:46.564026  443746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:29:46.581139  443746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:29:46.595884  443746 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:29:46.595951  443746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:29:46.612471  443746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:29:46.627873  443746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:29:46.799935  443746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:29:46.968841  443746 docker.go:234] disabling docker service ...
	I1101 10:29:46.968907  443746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:29:46.987033  443746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:29:47.001822  443746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:29:47.198200  443746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:29:47.419139  443746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:29:47.437343  443746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:29:47.452499  443746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:29:47.452591  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.461857  443746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:29:47.461925  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.471596  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.484341  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.492956  443746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:29:47.501276  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.511009  443746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.520232  443746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:29:47.529905  443746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:29:47.538067  443746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:29:47.545977  443746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:47.716024  443746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:29:47.940930  443746 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:29:47.941007  443746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:29:47.945680  443746 start.go:564] Will wait 60s for crictl version
	I1101 10:29:47.945756  443746 ssh_runner.go:195] Run: which crictl
	I1101 10:29:47.949590  443746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:29:48.001939  443746 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:29:48.002033  443746 ssh_runner.go:195] Run: crio --version
	I1101 10:29:48.050079  443746 ssh_runner.go:195] Run: crio --version
	I1101 10:29:48.090055  443746 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:29:48.092953  443746 cli_runner.go:164] Run: docker network inspect pause-197523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:29:48.126254  443746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:29:48.130464  443746 kubeadm.go:884] updating cluster {Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:29:48.130598  443746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:29:48.130653  443746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:48.179952  443746 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:48.179973  443746 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:29:48.180027  443746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:29:48.208495  443746 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:29:48.208516  443746 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:29:48.208524  443746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:29:48.208626  443746 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-197523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:29:48.208706  443746 ssh_runner.go:195] Run: crio config
	I1101 10:29:48.285829  443746 cni.go:84] Creating CNI manager for ""
	I1101 10:29:48.285903  443746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:29:48.285944  443746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:29:48.285997  443746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-197523 NodeName:pause-197523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:29:48.286171  443746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-197523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:29:48.286285  443746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:29:48.296012  443746 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:29:48.296089  443746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:29:48.305288  443746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 10:29:48.319684  443746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:29:48.337968  443746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
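The kubeadm config staged here for pause-197523 keeps the kubelet's cgroupDriver (cgroupfs, matching the "cgroupfs" driver detected on the host earlier) in step with the cgroup_manager written into the CRI-O drop-in above. A quick consistency check, a sketch assuming the staged paths from this log:

    # The kubelet and CRI-O must agree on the cgroup driver; both values were set above.
    grep '^cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new
    grep 'cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
    # Expected for this profile (reconstructed): cgroupDriver: cgroupfs / cgroup_manager = "cgroupfs"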
	I1101 10:29:48.353295  443746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:29:48.358301  443746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:48.534653  443746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:29:48.549671  443746 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523 for IP: 192.168.76.2
	I1101 10:29:48.549760  443746 certs.go:195] generating shared ca certs ...
	I1101 10:29:48.549778  443746 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:48.549943  443746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:29:48.549987  443746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:29:48.549995  443746 certs.go:257] generating profile certs ...
	I1101 10:29:48.550082  443746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key
	I1101 10:29:48.550148  443746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/apiserver.key.a1c74574
	I1101 10:29:48.550185  443746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/proxy-client.key
	I1101 10:29:48.550302  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:29:48.550332  443746 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:29:48.550339  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:29:48.550360  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:29:48.550384  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:29:48.550404  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:29:48.550444  443746 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:48.551070  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:29:48.573517  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:29:48.591752  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:29:48.609794  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:29:48.629344  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:29:48.650757  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:29:48.671722  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:29:48.692018  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:29:48.730239  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:29:48.753477  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:29:48.772344  443746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:29:48.792226  443746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:29:48.807654  443746 ssh_runner.go:195] Run: openssl version
	I1101 10:29:48.816890  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:29:48.826818  443746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.832172  443746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.832232  443746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.880129  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:29:48.890334  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:29:48.900309  443746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.906320  443746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.906384  443746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.958535  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:29:48.968929  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:29:48.978795  443746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:29:48.984672  443746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:29:48.984750  443746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.037943  443746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:29:49.047339  443746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:29:49.052064  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:29:49.096015  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:29:49.146968  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:29:49.244234  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
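
The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above are how the CA files get registered in the node's trust directory: OpenSSL looks certificates up by an eight-hex-digit subject hash (b5213941, 51391683, 3ec20f2e in this run), and the `.0` suffix is the first slot for that hash. A minimal local sketch of those two steps (assuming `openssl` is on PATH and the trust directory is writable; the real runs go through ssh_runner on the node):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the two commands in the log: compute the OpenSSL subject
// hash of a PEM certificate, then link it as <hash>.0 in the trust directory.
func installCA(certPath, trustDir string) error {
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join(trustDir, hash+".0")
	// ln -fs equivalent: drop any stale link, then symlink the cert into place.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
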
	I1101 10:29:47.667603  442711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540 ...
	I1101 10:29:47.667635  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540: {Name:mk6b2a37132a3498f5409f04d7b4bb0504bcfda7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:47.667814  442711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540 ...
	I1101 10:29:47.667831  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540: {Name:mk6f2a0957589eaddbdc91953416f9f6e758d1c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:47.667910  442711 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt.f180a540 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt
	I1101 10:29:47.667992  442711 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key.f180a540 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key
	I1101 10:29:47.668054  442711 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key
	I1101 10:29:47.668074  442711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt with IP's: []
	I1101 10:29:48.804318  442711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt ...
	I1101 10:29:48.804381  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt: {Name:mk6ca4f6e33ae9c047ccace6f2532c90c868494b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:48.805226  442711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key ...
	I1101 10:29:48.805244  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key: {Name:mk08c15e6950c6bbc5bd8dc874b1a85a31f5ecb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
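
Each profile cert above is first written under a hash-suffixed name (for example `apiserver.crt.f180a540`) while a file lock is held, and only then copied onto its canonical path, so a partially written cert never sits where consumers read it. A rough local sketch of that stage-then-copy step (paths and contents here are illustrative; the lock handling is omitted):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// copyFile mirrors the "copying apiserver.crt.f180a540 -> apiserver.crt" step:
// the suffixed file is kept as the staged original and its bytes are copied
// to the final path once fully written.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}

func main() {
	staged := "/tmp/apiserver.crt.f180a540"
	// Hypothetical staged write; the real code acquires a write lock first.
	if err := os.WriteFile(staged, []byte("-----BEGIN CERTIFICATE-----\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := copyFile(staged, "/tmp/apiserver.crt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
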
	I1101 10:29:48.805898  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 10:29:48.805925  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 10:29:48.805938  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 10:29:48.805950  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 10:29:48.805961  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 10:29:48.805974  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 10:29:48.805985  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 10:29:48.805995  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 10:29:48.806044  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:29:48.806077  442711 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:29:48.806086  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:29:48.806114  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:29:48.806138  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:29:48.806159  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:29:48.806200  442711 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:29:48.806227  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:48.806241  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem -> /usr/share/ca-certificates/287135.pem
	I1101 10:29:48.806251  442711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> /usr/share/ca-certificates/2871352.pem
	I1101 10:29:48.806775  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:29:48.828836  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:29:48.852485  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:29:48.869671  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:29:48.888399  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 10:29:48.909534  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:29:48.929855  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:29:48.947605  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:29:48.966763  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:29:48.988016  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:29:49.007829  442711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:29:49.026079  442711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:29:49.039360  442711 ssh_runner.go:195] Run: openssl version
	I1101 10:29:49.046656  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:29:49.055916  442711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:49.060301  442711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:49.060365  442711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:29:49.102342  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:29:49.111430  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:29:49.119511  442711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:29:49.123636  442711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:29:49.123716  442711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:29:49.187161  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:29:49.197534  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:29:49.213678  442711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.223199  442711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.223260  442711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:29:49.278920  442711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:29:49.288139  442711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:29:49.293165  442711 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
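
The `stat` above is the first-start probe: if `/var/lib/minikube/certs/apiserver-kubelet-client.crt` is missing, the tool assumes a fresh cluster and runs a full `kubeadm init`; if it exists, it takes the restart path instead (as the other profile does further down). A local analogue of that check, using `os.Stat` instead of `stat` over SSH:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// isFirstStart reports whether the kubeadm-generated client cert is absent,
// which the log treats as "likely first start" rather than a restart.
func isFirstStart(certPath string) (bool, error) {
	_, err := os.Stat(certPath)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // no cert yet: run a full kubeadm init
	}
	return false, err // nil error => cert exists => restart path
}

func main() {
	first, err := isFirstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("first start:", first)
}
```
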
	I1101 10:29:49.293229  442711 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-854151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-854151 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:49.293304  442711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:29:49.293361  442711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:29:49.332016  442711 cri.go:89] found id: ""
	I1101 10:29:49.332085  442711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:29:49.341906  442711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:29:49.349886  442711 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:29:49.349957  442711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:29:49.362219  442711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:29:49.362239  442711 kubeadm.go:158] found existing configuration files:
	
	I1101 10:29:49.362291  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:29:49.377004  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:29:49.377067  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:29:49.387264  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:29:49.409584  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:29:49.409646  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:29:49.429348  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:29:49.449055  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:29:49.449121  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:29:49.466844  442711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:29:49.491371  442711 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:29:49.491445  442711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
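
The grep/rm pairs above are the stale-config sweep: each kubeconfig under /etc/kubernetes must reference `https://control-plane.minikube.internal:8443`, and any file that is missing or points elsewhere is deleted so `kubeadm init` can rewrite it. A sketch of the same loop run directly against the local filesystem (the log performs it over SSH with grep):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const wantServer = "https://control-plane.minikube.internal:8443"

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep/rm sequence in the log.
func cleanStaleConfigs(dir string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := dir + "/" + name
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), wantServer) {
			// Missing or pointing elsewhere: drop it so kubeadm regenerates it.
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintf(os.Stderr, "removing %s: %v\n", path, rmErr)
			}
		}
	}
}

func main() {
	cleanStaleConfigs("/etc/kubernetes")
}
```
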
	I1101 10:29:49.506874  442711 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:29:49.574695  442711 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:29:49.574864  442711 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:29:49.618510  442711 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:29:49.618591  442711 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:29:49.618633  442711 kubeadm.go:319] OS: Linux
	I1101 10:29:49.618685  442711 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:29:49.618739  442711 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:29:49.618794  442711 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:29:49.618850  442711 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:29:49.618904  442711 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:29:49.618974  442711 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:29:49.619027  442711 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:29:49.619082  442711 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:29:49.619133  442711 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:29:49.726271  442711 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:29:49.726414  442711 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:29:49.726537  442711 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:29:49.739233  442711 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:29:49.360802  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:29:49.566814  443746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
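
Each `-checkend 86400` run above asks openssl whether the certificate will still be valid 24 hours from now; a failing check would force regeneration before the restart continues. The same test in Go, assuming one PEM certificate per file:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
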
	I1101 10:29:49.714500  443746 kubeadm.go:401] StartCluster: {Name:pause-197523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-197523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:29:49.714647  443746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:29:49.714878  443746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:29:49.790509  443746 cri.go:89] found id: "4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a"
	I1101 10:29:49.790527  443746 cri.go:89] found id: "3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84"
	I1101 10:29:49.790532  443746 cri.go:89] found id: "6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a"
	I1101 10:29:49.790535  443746 cri.go:89] found id: "c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6"
	I1101 10:29:49.790538  443746 cri.go:89] found id: "87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed"
	I1101 10:29:49.790542  443746 cri.go:89] found id: "d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4"
	I1101 10:29:49.790545  443746 cri.go:89] found id: "b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	I1101 10:29:49.790548  443746 cri.go:89] found id: "99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362"
	I1101 10:29:49.790551  443746 cri.go:89] found id: "da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2"
	I1101 10:29:49.790558  443746 cri.go:89] found id: "6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31"
	I1101 10:29:49.790562  443746 cri.go:89] found id: "44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01"
	I1101 10:29:49.790565  443746 cri.go:89] found id: "7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447"
	I1101 10:29:49.790568  443746 cri.go:89] found id: "4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3"
	I1101 10:29:49.790571  443746 cri.go:89] found id: ""
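
`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line, which cri.go splits into the `found id:` entries above (the trailing empty entry marks the end of the output). A small sketch of running and parsing that exact command (assumes crictl and sudo are available on the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs lists all CRI containers (running or not) whose pod
// lives in kube-system, using the exact flags seen in the log.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```
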
	I1101 10:29:49.790767  443746 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:29:49.819428  443746 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:29:49Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:29:49.819527  443746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:29:49.851328  443746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:29:49.851455  443746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:29:49.851549  443746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:29:49.875148  443746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:29:49.875850  443746 kubeconfig.go:125] found "pause-197523" server: "https://192.168.76.2:8443"
	I1101 10:29:49.876597  443746 kapi.go:59] client config for pause-197523: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:29:49.877265  443746 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:29:49.877394  443746 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:29:49.877417  443746 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:29:49.877458  443746 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:29:49.877480  443746 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:29:49.877999  443746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:29:49.900062  443746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:29:49.900148  443746 kubeadm.go:602] duration metric: took 48.671161ms to restartPrimaryControlPlane
	I1101 10:29:49.900172  443746 kubeadm.go:403] duration metric: took 185.682293ms to StartCluster
	I1101 10:29:49.900215  443746 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:49.900314  443746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:29:49.901116  443746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:29:49.901435  443746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:29:49.901910  443746 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:29:49.901996  443746 config.go:182] Loaded profile config "pause-197523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:49.905224  443746 out.go:179] * Verifying Kubernetes components...
	I1101 10:29:49.905308  443746 out.go:179] * Enabled addons: 
	I1101 10:29:49.745240  442711 out.go:252]   - Generating certificates and keys ...
	I1101 10:29:49.745354  442711 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:29:49.745438  442711 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:29:50.699952  442711 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:29:52.287258  442711 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:29:49.908080  443746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:29:49.908222  443746 addons.go:515] duration metric: took 6.30914ms for enable addons: enabled=[]
	I1101 10:29:50.251974  443746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:29:50.331535  443746 node_ready.go:35] waiting up to 6m0s for node "pause-197523" to be "Ready" ...
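
node_ready.go then polls the node object until its Ready condition turns True; the wait completes a few lines further down after roughly 5.7 seconds. A hedged client-go sketch of such a wait, assuming a kubeconfig at the default location rather than minikube's own client plumbing:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports a Ready=True condition.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "pause-197523", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```
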
	I1101 10:29:52.578064  442711 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:29:52.906430  442711 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:29:52.996306  442711 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:29:52.996785  442711 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-854151 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:29:53.422703  442711 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:29:53.423071  442711 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-854151 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:29:54.577444  442711 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:29:54.821228  442711 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:29:55.278104  442711 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:29:55.278179  442711 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:29:55.539073  442711 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:29:56.377624  442711 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:29:57.746817  442711 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:29:58.321121  442711 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:29:58.960209  442711 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:29:58.961117  442711 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:29:58.964185  442711 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:29:56.021235  443746 node_ready.go:49] node "pause-197523" is "Ready"
	I1101 10:29:56.021315  443746 node_ready.go:38] duration metric: took 5.689748567s for node "pause-197523" to be "Ready" ...
	I1101 10:29:56.021343  443746 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:29:56.021433  443746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:29:56.051359  443746 api_server.go:72] duration metric: took 6.149856016s to wait for apiserver process to appear ...
	I1101 10:29:56.051436  443746 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:29:56.051471  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:56.148066  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:29:56.148147  443746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:29:56.551585  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:56.563849  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:29:56.563877  443746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:29:57.052035  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:57.062454  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:29:57.062487  443746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:29:57.552135  443746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:29:57.562571  443746 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:29:57.566743  443746 api_server.go:141] control plane version: v1.34.1
	I1101 10:29:57.566773  443746 api_server.go:131] duration metric: took 1.515316746s to wait for apiserver health ...
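
api_server.go keeps re-requesting `/healthz` until the 500 responses (failing post-start hooks such as `bootstrap-controller` and `rbac/bootstrap-roles`) give way to a plain 200 `ok`; here that took about 1.5 seconds, with retries roughly every 500 ms. A minimal sketch of that poll, trusting the cluster CA file referenced earlier in the log:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitHealthz polls url until it returns HTTP 200, printing the readiness
// breakdown on non-200 responses, much like the log output above.
func waitHealthz(url, caPath string, timeout time.Duration) error {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log retries roughly every 500ms
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	err := waitHealthz("https://192.168.76.2:8443/healthz",
		"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", 2*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ok")
}
```
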
	I1101 10:29:57.566783  443746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:29:57.583544  443746 system_pods.go:59] 7 kube-system pods found
	I1101 10:29:57.583591  443746 system_pods.go:61] "coredns-66bc5c9577-svwdl" [bbc67d74-e6c7-40ab-a5d7-6677d46431af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:29:57.583600  443746 system_pods.go:61] "etcd-pause-197523" [9e3f44e6-6d0a-4684-a5f9-a0d0ad8ad738] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:29:57.583607  443746 system_pods.go:61] "kindnet-jhdpd" [79caf352-bf51-4b51-b25b-b7a3daf6cd52] Running
	I1101 10:29:57.583615  443746 system_pods.go:61] "kube-apiserver-pause-197523" [c88a0cc5-db1c-4467-a711-53f1289ebe04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:29:57.583624  443746 system_pods.go:61] "kube-controller-manager-pause-197523" [30e49b56-a708-4899-956f-f16d86f3ad93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:29:57.583634  443746 system_pods.go:61] "kube-proxy-mwwgw" [728cdaf0-253c-46c6-83e3-5cb2e800e24f] Running
	I1101 10:29:57.583641  443746 system_pods.go:61] "kube-scheduler-pause-197523" [bbe10c2f-8f5d-4566-a431-7cb64304c2fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:29:57.583654  443746 system_pods.go:74] duration metric: took 16.86552ms to wait for pod list to return data ...
	I1101 10:29:57.583663  443746 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:29:57.588574  443746 default_sa.go:45] found service account: "default"
	I1101 10:29:57.588598  443746 default_sa.go:55] duration metric: took 4.924023ms for default service account to be created ...
	I1101 10:29:57.588607  443746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:29:57.595781  443746 system_pods.go:86] 7 kube-system pods found
	I1101 10:29:57.595818  443746 system_pods.go:89] "coredns-66bc5c9577-svwdl" [bbc67d74-e6c7-40ab-a5d7-6677d46431af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:29:57.595828  443746 system_pods.go:89] "etcd-pause-197523" [9e3f44e6-6d0a-4684-a5f9-a0d0ad8ad738] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:29:57.595834  443746 system_pods.go:89] "kindnet-jhdpd" [79caf352-bf51-4b51-b25b-b7a3daf6cd52] Running
	I1101 10:29:57.595841  443746 system_pods.go:89] "kube-apiserver-pause-197523" [c88a0cc5-db1c-4467-a711-53f1289ebe04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:29:57.595847  443746 system_pods.go:89] "kube-controller-manager-pause-197523" [30e49b56-a708-4899-956f-f16d86f3ad93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:29:57.595853  443746 system_pods.go:89] "kube-proxy-mwwgw" [728cdaf0-253c-46c6-83e3-5cb2e800e24f] Running
	I1101 10:29:57.595859  443746 system_pods.go:89] "kube-scheduler-pause-197523" [bbe10c2f-8f5d-4566-a431-7cb64304c2fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:29:57.595865  443746 system_pods.go:126] duration metric: took 7.25348ms to wait for k8s-apps to be running ...
	I1101 10:29:57.595880  443746 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:29:57.595932  443746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:29:57.650786  443746 system_svc.go:56] duration metric: took 54.895688ms WaitForService to wait for kubelet
	I1101 10:29:57.650866  443746 kubeadm.go:587] duration metric: took 7.749367254s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:29:57.650900  443746 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:29:57.657429  443746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:29:57.657510  443746 node_conditions.go:123] node cpu capacity is 2
	I1101 10:29:57.657537  443746 node_conditions.go:105] duration metric: took 6.614234ms to run NodePressure ...
	I1101 10:29:57.657561  443746 start.go:242] waiting for startup goroutines ...
	I1101 10:29:57.657598  443746 start.go:247] waiting for cluster config update ...
	I1101 10:29:57.657625  443746 start.go:256] writing updated cluster config ...
	I1101 10:29:57.658038  443746 ssh_runner.go:195] Run: rm -f paused
	I1101 10:29:57.662461  443746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:29:57.663113  443746 kapi.go:59] client config for pause-197523: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:29:57.667140  443746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-svwdl" in "kube-system" namespace to be "Ready" or be gone ...
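
pod_ready.go then walks the core kube-system pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and waits for each one to report Ready or disappear. A hedged client-go sketch of a single such wait (a generic helper, not minikube's own code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the "Ready" check in the log: the pod's PodReady
// condition must be True (a deleted pod is handled by the caller).
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-66bc5c9577-svwdl", 4*time.Minute); err != nil {
		panic(err)
	}
}
```
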
	I1101 10:29:58.967695  442711 out.go:252]   - Booting up control plane ...
	I1101 10:29:58.967802  442711 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:29:58.968172  442711 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:29:58.969485  442711 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:29:58.987860  442711 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:29:58.987972  442711 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:29:58.995914  442711 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:29:58.996018  442711 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:29:58.996358  442711 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:29:59.204438  442711 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:29:59.204563  442711 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:30:00.220316  442711 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003408899s
	I1101 10:30:00.220433  442711 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:30:00.220519  442711 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:30:00.220613  442711 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:30:00.220695  442711 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 10:29:59.676559  443746 pod_ready.go:104] pod "coredns-66bc5c9577-svwdl" is not "Ready", error: <nil>
	I1101 10:30:01.673247  443746 pod_ready.go:94] pod "coredns-66bc5c9577-svwdl" is "Ready"
	I1101 10:30:01.673289  443746 pod_ready.go:86] duration metric: took 4.006079328s for pod "coredns-66bc5c9577-svwdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:01.675791  443746 pod_ready.go:83] waiting for pod "etcd-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.181815  443746 pod_ready.go:94] pod "etcd-pause-197523" is "Ready"
	I1101 10:30:03.181844  443746 pod_ready.go:86] duration metric: took 1.506023853s for pod "etcd-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.184809  443746 pod_ready.go:83] waiting for pod "kube-apiserver-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.189765  443746 pod_ready.go:94] pod "kube-apiserver-pause-197523" is "Ready"
	I1101 10:30:03.189802  443746 pod_ready.go:86] duration metric: took 4.966166ms for pod "kube-apiserver-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:03.192204  443746 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:30:05.200698  443746 pod_ready.go:104] pod "kube-controller-manager-pause-197523" is not "Ready", error: <nil>
	I1101 10:30:06.197504  443746 pod_ready.go:94] pod "kube-controller-manager-pause-197523" is "Ready"
	I1101 10:30:06.197541  443746 pod_ready.go:86] duration metric: took 3.005312449s for pod "kube-controller-manager-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.199795  443746 pod_ready.go:83] waiting for pod "kube-proxy-mwwgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.205403  443746 pod_ready.go:94] pod "kube-proxy-mwwgw" is "Ready"
	I1101 10:30:06.205433  443746 pod_ready.go:86] duration metric: took 5.61035ms for pod "kube-proxy-mwwgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.271118  443746 pod_ready.go:83] waiting for pod "kube-scheduler-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.670509  443746 pod_ready.go:94] pod "kube-scheduler-pause-197523" is "Ready"
	I1101 10:30:06.670536  443746 pod_ready.go:86] duration metric: took 399.387794ms for pod "kube-scheduler-pause-197523" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:30:06.670549  443746 pod_ready.go:40] duration metric: took 9.008005361s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:30:06.740500  443746 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:30:06.743628  443746 out.go:179] * Done! kubectl is now configured to use "pause-197523" cluster and "default" namespace by default
	I1101 10:30:03.638128  442711 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.423881053s
	I1101 10:30:05.208802  442711 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.996964171s
	I1101 10:30:06.717926  442711 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.504658869s
	I1101 10:30:06.743244  442711 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:30:06.806947  442711 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:30:06.827529  442711 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:30:06.827743  442711 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-flag-854151 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:30:06.853335  442711 kubeadm.go:319] [bootstrap-token] Using token: ayqqn8.oxxi48m2zksj08s4
	I1101 10:30:06.856364  442711 out.go:252]   - Configuring RBAC rules ...
	I1101 10:30:06.856489  442711 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:30:06.869921  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:30:06.884700  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:30:06.890976  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:30:06.897585  442711 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:30:06.905221  442711 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:30:07.129058  442711 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:30:07.653376  442711 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:30:08.129078  442711 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:30:08.130671  442711 kubeadm.go:319] 
	I1101 10:30:08.130759  442711 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:30:08.130772  442711 kubeadm.go:319] 
	I1101 10:30:08.130854  442711 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:30:08.130864  442711 kubeadm.go:319] 
	I1101 10:30:08.130891  442711 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:30:08.130957  442711 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:30:08.131013  442711 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:30:08.131022  442711 kubeadm.go:319] 
	I1101 10:30:08.131079  442711 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:30:08.131088  442711 kubeadm.go:319] 
	I1101 10:30:08.131139  442711 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:30:08.131148  442711 kubeadm.go:319] 
	I1101 10:30:08.131203  442711 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:30:08.131296  442711 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:30:08.131375  442711 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:30:08.131386  442711 kubeadm.go:319] 
	I1101 10:30:08.131475  442711 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:30:08.131559  442711 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:30:08.131567  442711 kubeadm.go:319] 
	I1101 10:30:08.131655  442711 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ayqqn8.oxxi48m2zksj08s4 \
	I1101 10:30:08.131766  442711 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:30:08.131794  442711 kubeadm.go:319] 	--control-plane 
	I1101 10:30:08.131803  442711 kubeadm.go:319] 
	I1101 10:30:08.131892  442711 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:30:08.131900  442711 kubeadm.go:319] 
	I1101 10:30:08.131986  442711 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ayqqn8.oxxi48m2zksj08s4 \
	I1101 10:30:08.132096  442711 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:30:08.137278  442711 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:30:08.137531  442711 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:30:08.137649  442711 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:30:08.137673  442711 cni.go:84] Creating CNI manager for ""
	I1101 10:30:08.137680  442711 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:30:08.140660  442711 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:30:08.143548  442711 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:30:08.151132  442711 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:30:08.151151  442711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:30:08.171176  442711 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:30:08.491548  442711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:30:08.491707  442711 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-854151 minikube.k8s.io/updated_at=2025_11_01T10_30_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=force-systemd-flag-854151 minikube.k8s.io/primary=true
	I1101 10:30:08.491715  442711 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:30:08.691156  442711 kubeadm.go:1114] duration metric: took 199.511802ms to wait for elevateKubeSystemPrivileges
	I1101 10:30:08.691216  442711 ops.go:34] apiserver oom_adj: -16
	I1101 10:30:08.691225  442711 kubeadm.go:403] duration metric: took 19.398001705s to StartCluster
	I1101 10:30:08.691241  442711 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:30:08.691322  442711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:30:08.692286  442711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:30:08.692527  442711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:30:08.692541  442711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:30:08.692821  442711 config.go:182] Loaded profile config "force-systemd-flag-854151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:30:08.692866  442711 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:30:08.692942  442711 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-flag-854151"
	I1101 10:30:08.692957  442711 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-flag-854151"
	I1101 10:30:08.692963  442711 addons.go:70] Setting default-storageclass=true in profile "force-systemd-flag-854151"
	I1101 10:30:08.692980  442711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-854151"
	I1101 10:30:08.692984  442711 host.go:66] Checking if "force-systemd-flag-854151" exists ...
	I1101 10:30:08.693408  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:30:08.693468  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:30:08.698563  442711 out.go:179] * Verifying Kubernetes components...
	I1101 10:30:08.701543  442711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:30:08.726322  442711 kapi.go:59] client config for force-systemd-flag-854151: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:30:08.726861  442711 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:30:08.726875  442711 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:30:08.726881  442711 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:30:08.726886  442711 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:30:08.726890  442711 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:30:08.726941  442711 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 10:30:08.727256  442711 addons.go:239] Setting addon default-storageclass=true in "force-systemd-flag-854151"
	I1101 10:30:08.727283  442711 host.go:66] Checking if "force-systemd-flag-854151" exists ...
	I1101 10:30:08.727719  442711 cli_runner.go:164] Run: docker container inspect force-systemd-flag-854151 --format={{.State.Status}}
	I1101 10:30:08.747477  442711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:30:08.751301  442711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:30:08.751327  442711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:30:08.751407  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:30:08.769328  442711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:30:08.769354  442711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:30:08.769415  442711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-854151
	I1101 10:30:08.789845  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:30:08.803442  442711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/force-systemd-flag-854151/id_rsa Username:docker}
	I1101 10:30:09.080797  442711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:30:09.164419  442711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:30:09.171333  442711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:30:09.172813  442711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:30:09.922830  442711 kapi.go:59] client config for force-systemd-flag-854151: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:30:09.923138  442711 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:30:09.923191  442711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:30:09.923301  442711 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:30:09.923858  442711 kapi.go:59] client config for force-systemd-flag-854151: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/force-systemd-flag-854151/client.key", CAFile:"/home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:30:09.945098  442711 api_server.go:72] duration metric: took 1.252541531s to wait for apiserver process to appear ...
	I1101 10:30:09.945131  442711 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:30:09.945150  442711 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:30:09.982736  442711 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:30:09.987040  442711 api_server.go:141] control plane version: v1.34.1
	I1101 10:30:09.987072  442711 api_server.go:131] duration metric: took 41.933519ms to wait for apiserver health ...
	I1101 10:30:09.987089  442711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:30:09.992601  442711 system_pods.go:59] 5 kube-system pods found
	I1101 10:30:09.992640  442711 system_pods.go:61] "etcd-force-systemd-flag-854151" [330b23ef-a7bc-4bb3-ba65-bce979dd6bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:30:09.992649  442711 system_pods.go:61] "kube-apiserver-force-systemd-flag-854151" [693e44ff-6368-44fe-9d06-4f678d8ce783] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:30:09.992660  442711 system_pods.go:61] "kube-controller-manager-force-systemd-flag-854151" [b25df811-25d6-4016-b326-5bfd3bd01721] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:30:09.992667  442711 system_pods.go:61] "kube-scheduler-force-systemd-flag-854151" [180a9d7b-4f5b-483e-b2af-6987757486ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:30:09.992672  442711 system_pods.go:61] "storage-provisioner" [13bc52d4-0358-453e-aa75-b9b88936626b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:30:09.992683  442711 system_pods.go:74] duration metric: took 5.589033ms to wait for pod list to return data ...
	I1101 10:30:09.992699  442711 kubeadm.go:587] duration metric: took 1.300149253s to wait for: map[apiserver:true system_pods:true]
	I1101 10:30:09.992716  442711 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:30:09.995785  442711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:30:09.996008  442711 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:30:09.996033  442711 node_conditions.go:123] node cpu capacity is 2
	I1101 10:30:09.996046  442711 node_conditions.go:105] duration metric: took 3.323865ms to run NodePressure ...
	I1101 10:30:09.996059  442711 start.go:242] waiting for startup goroutines ...
	I1101 10:30:09.998787  442711 addons.go:515] duration metric: took 1.305900322s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:30:10.427470  442711 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-854151" context rescaled to 1 replicas
	I1101 10:30:10.427509  442711 start.go:247] waiting for cluster config update ...
	I1101 10:30:10.427523  442711 start.go:256] writing updated cluster config ...
	I1101 10:30:10.427808  442711 ssh_runner.go:195] Run: rm -f paused
	I1101 10:30:10.513390  442711 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:30:10.516664  442711 out.go:179] * Done! kubectl is now configured to use "force-systemd-flag-854151" cluster and "default" namespace by default
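	Both start-up streams above end with a ready cluster. As a sanity check (hedged: the commands below are not part of the captured log; they only assume the kubeconfig written above at /home/jenkins/minikube-integration/21833-285274/kubeconfig and the profile names already shown), the freshly configured contexts could be exercised with standard kubectl:
	
	  export KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  kubectl config get-contexts
	  kubectl --context force-systemd-flag-854151 get nodes -o wide
	  kubectl --context pause-197523 -n kube-system get pods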
	
	
	==> CRI-O <==
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.510014719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.510851587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.525023543Z" level=info msg="Starting container: 6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a" id=24e1d850-2dd2-483d-ba90-7ecf4caf0f3e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.565625747Z" level=info msg="Started container" PID=2316 containerID=6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a description=kube-system/kindnet-jhdpd/kindnet-cni id=24e1d850-2dd2-483d-ba90-7ecf4caf0f3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c29766a43ee888fa895df77af04bd1ed5540c3ce52e0c3aa29dfc46e380800e
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.664430076Z" level=info msg="Created container 3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84: kube-system/kube-controller-manager-pause-197523/kube-controller-manager" id=949bd11a-8dee-4621-a96e-7ef4a987674e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.666212761Z" level=info msg="Starting container: 3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84" id=f23c6ce3-2ed8-42bb-8abf-caf36a987655 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.68857146Z" level=info msg="Started container" PID=2350 containerID=3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84 description=kube-system/kube-controller-manager-pause-197523/kube-controller-manager id=f23c6ce3-2ed8-42bb-8abf-caf36a987655 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fd946064ed5a1c83cbca5eb8fd69f1bca2ebceb23d35dbd8c58e591dd73560e
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.716867962Z" level=info msg="Created container 4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a: kube-system/coredns-66bc5c9577-svwdl/coredns" id=ed0eb5d3-f6ab-48b5-a19a-0e07ea743b9b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.72254626Z" level=info msg="Starting container: 4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a" id=e851a934-8c61-4290-8f14-3bcae5d8fddf name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:49 pause-197523 crio[2072]: time="2025-11-01T10:29:49.730073359Z" level=info msg="Started container" PID=2358 containerID=4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a description=kube-system/coredns-66bc5c9577-svwdl/coredns id=e851a934-8c61-4290-8f14-3bcae5d8fddf name=/runtime.v1.RuntimeService/StartContainer sandboxID=34694322e55facff1146ffd185dc11f071d5a82424e041b50f9045fdb95c8009
	Nov 01 10:29:50 pause-197523 crio[2072]: time="2025-11-01T10:29:50.512441407Z" level=info msg="Created container b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7: kube-system/kube-proxy-mwwgw/kube-proxy" id=9e2cb423-ebf6-4904-81e2-8ded1da80323 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:29:50 pause-197523 crio[2072]: time="2025-11-01T10:29:50.514894213Z" level=info msg="Starting container: b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7" id=2125f4e1-14a0-4769-a246-dfd3d6c46a38 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:29:50 pause-197523 crio[2072]: time="2025-11-01T10:29:50.520172004Z" level=info msg="Started container" PID=2334 containerID=b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7 description=kube-system/kube-proxy-mwwgw/kube-proxy id=2125f4e1-14a0-4769-a246-dfd3d6c46a38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f159266330daa9538d84151e49286bc7c50804c4dff500244c952c1e0fa9975
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.077900207Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.154895128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.155115906Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.155218775Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.219352296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.219550559Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.219664816Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.297992094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.29818673Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.298305714Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.302747435Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:30:00 pause-197523 crio[2072]: time="2025-11-01T10:30:00.302799513Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4b464843f33d1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   34694322e55fa       coredns-66bc5c9577-svwdl               kube-system
	3c3fa591e90f0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   1fd946064ed5a       kube-controller-manager-pause-197523   kube-system
	b85d566999f00       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   2f159266330da       kube-proxy-mwwgw                       kube-system
	6f72b51f09b07       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   7c29766a43ee8       kindnet-jhdpd                          kube-system
	c46b8aaeffa00       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   ff7d8c3bf49da       kube-scheduler-pause-197523            kube-system
	87b9897087e6a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   bfac7e8318e0c       etcd-pause-197523                      kube-system
	d28a5938aa109       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   779254e2c3d00       kube-apiserver-pause-197523            kube-system
	b76464b1416c8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   34694322e55fa       coredns-66bc5c9577-svwdl               kube-system
	99e565cbd3b72       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   2f159266330da       kube-proxy-mwwgw                       kube-system
	da788d7cea8ef       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   7c29766a43ee8       kindnet-jhdpd                          kube-system
	6c5a2fe54c508       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bfac7e8318e0c       etcd-pause-197523                      kube-system
	44db24a24cd97       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   779254e2c3d00       kube-apiserver-pause-197523            kube-system
	7149d740a3610       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   ff7d8c3bf49da       kube-scheduler-pause-197523            kube-system
	4742f77b740db       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1fd946064ed5a       kube-controller-manager-pause-197523   kube-system
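	The table above is the CRI-level container view for pause-197523: one restarted (attempt 1) set of control-plane and add-on containers plus the exited attempt-0 set. A roughly equivalent listing can be reproduced on the node, assuming ssh access to the profile, with:
	
	  minikube -p pause-197523 ssh -- sudo crictl ps -a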
	
	
	==> coredns [4b464843f33d12dfc5388c1c79485e0452ec53fadb8fd7e869e17be49fd4b50a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59001 - 21446 "HINFO IN 2723435776611723228.7685608349230344559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003934867s
	
	
	==> coredns [b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54012 - 23652 "HINFO IN 5500107722844301064.582281186405622169. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023656971s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
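	The first coredns instance above shut down cleanly on SIGTERM, and its replacement (previous section) is serving on :53. A hedged way to exercise in-cluster DNS in this environment, using only standard kubectl and a throwaway pod, would be:
	
	  kubectl --context pause-197523 run dns-probe --rm -i --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local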
	
	
	==> describe nodes <==
	Name:               pause-197523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-197523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=pause-197523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_28_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-197523
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:30:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:28:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:28:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:28:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:29:35 +0000   Sat, 01 Nov 2025 10:29:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-197523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8c3055d4-6ef2-4330-b24a-ecab648c0a33
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-svwdl                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-197523                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-jhdpd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-pause-197523             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-197523    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-mwwgw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-197523             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 94s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  94s (x8 over 94s)  kubelet          Node pause-197523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s (x8 over 94s)  kubelet          Node pause-197523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s (x8 over 94s)  kubelet          Node pause-197523 status is now: NodeHasSufficientPID
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-197523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-197523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-197523 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-197523 event: Registered Node pause-197523 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-197523 status is now: NodeReady
	  Warning  ContainerGCFailed        26s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           14s                node-controller  Node pause-197523 event: Registered Node pause-197523 in Controller
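	The ContainerGCFailed warning above (dial unix /var/run/crio/crio.sock: no such file or directory) lines up with the CRI-O restart visible in the "==> CRI-O <==" section rather than with a persistent runtime failure. If this needed to be checked interactively (hypothetical follow-up, not captured in this report), the service and socket state on the node could be inspected with:
	
	  minikube -p pause-197523 ssh -- sudo systemctl status crio --no-pager
	  minikube -p pause-197523 ssh -- ls -l /var/run/crio/crio.sock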
	
	
	==> dmesg <==
	[  +4.195210] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:58] overlayfs: idmapped layers are currently not supported
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c5a2fe54c508b435413ed345062b1d2aa084495afa6dda84e231a17054c1e31] <==
	{"level":"warn","ts":"2025-11-01T10:28:43.201055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.219836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.245909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.270057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.290499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.357552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:28:43.411876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:29:40.718657Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:29:40.718711Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-197523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:29:40.718799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:29:40.992617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:29:40.994086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994129Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994177Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:29:40.994186Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:29:40.994164Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-01T10:29:40.994230Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:29:40.994253Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994319Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:29:40.994359Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:29:40.994394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:29:40.997479Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-01T10:29:40.997549Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:29:40.997622Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:29:40.997668Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-197523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [87b9897087e6aaa64c721ab5ef446d1366a01bc265a5a4b3cdb2f51049e586ed] <==
	{"level":"warn","ts":"2025-11-01T10:29:52.955280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.013820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.054675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.098226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.134106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.181659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.266947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.305105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.405865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.446856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.494584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.550145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.588918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.628419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.675053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.757783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.847925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:53.915995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:54.033957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:54.086545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:29:54.125941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56172","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:29:57.106330Z","caller":"traceutil/trace.go:172","msg":"trace[1639363615] linearizableReadLoop","detail":"{readStateIndex:543; appliedIndex:543; }","duration":"121.856722ms","start":"2025-11-01T10:29:56.984457Z","end":"2025-11-01T10:29:57.106314Z","steps":["trace[1639363615] 'read index received'  (duration: 121.839089ms)","trace[1639363615] 'applied index is now lower than readState.Index'  (duration: 17.108µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:29:57.106500Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.024396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-scheduler\" limit:1 ","response":"range_response_count:1 size:1835"}
	{"level":"info","ts":"2025-11-01T10:29:57.106544Z","caller":"traceutil/trace.go:172","msg":"trace[32956699] range","detail":"{range_begin:/registry/clusterroles/system:kube-scheduler; range_end:; response_count:1; response_revision:519; }","duration":"122.074382ms","start":"2025-11-01T10:29:56.984453Z","end":"2025-11-01T10:29:57.106527Z","steps":["trace[32956699] 'agreement among raft nodes before linearized reading'  (duration: 121.937428ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:29:57.119612Z","caller":"traceutil/trace.go:172","msg":"trace[770968399] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"135.435785ms","start":"2025-11-01T10:29:56.984159Z","end":"2025-11-01T10:29:57.119595Z","steps":["trace[770968399] 'process raft request'  (duration: 122.554774ms)","trace[770968399] 'compare'  (duration: 12.773457ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:30:14 up  2:12,  0 user,  load average: 4.86, 3.69, 2.59
	Linux pause-197523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f72b51f09b07a8ad78aae9be350adb7d37a32e97d6263ba1b819a0932d1d59a] <==
	I1101 10:29:49.754074       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:29:49.754346       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:29:49.754505       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:29:49.754518       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:29:49.754533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:29:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:29:50.065156       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:29:50.089757       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:29:50.089869       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:29:50.095106       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:29:56.194240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:29:56.194280       1 metrics.go:72] Registering metrics
	I1101 10:29:56.194349       1 controller.go:711] "Syncing nftables rules"
	I1101 10:30:00.077363       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:30:00.077521       1 main.go:301] handling current node
	I1101 10:30:10.064724       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:30:10.064773       1 main.go:301] handling current node
	
	
	==> kindnet [da788d7cea8ef8b74ba9aeddc734c4a58a0f8c301196a24317a0eebde5147eb2] <==
	I1101 10:28:55.221903       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:28:55.222322       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:28:55.222505       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:28:55.222559       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:28:55.222600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:28:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:28:55.421385       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:28:55.421457       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:28:55.421489       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:28:55.422440       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:29:25.421716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:29:25.423862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:29:25.424216       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:29:25.424413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:29:26.822446       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:29:26.822489       1 metrics.go:72] Registering metrics
	I1101 10:29:26.822554       1 controller.go:711] "Syncing nftables rules"
	I1101 10:29:35.427955       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:29:35.428011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [44db24a24cd979ca63b954e45e8c420af6e0dcf26da14d8102f7a645f5ef8c01] <==
	W1101 10:29:40.738307       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738361       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738416       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738478       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.738526       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.739753       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.739809       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.739954       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740015       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740057       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740121       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740160       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740195       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740234       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740274       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.740312       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741061       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741120       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741179       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741241       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741292       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741347       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741446       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741548       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:29:40.741645       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d28a5938aa1092bb3305ae498633bf03b37fe8e68dcfe4b02fc20e42488fa9e4] <==
	I1101 10:29:56.028888       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:29:56.028895       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:29:56.038343       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:29:56.038432       1 policy_source.go:240] refreshing policies
	I1101 10:29:56.052548       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:29:56.055187       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:29:56.075994       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:29:56.076133       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:29:56.076393       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:29:56.082548       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:29:56.103259       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:29:56.110527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:29:56.117098       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:29:56.119420       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:29:56.120837       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:29:56.120958       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:29:56.131942       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:29:56.132608       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1101 10:29:56.182455       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:29:56.652108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:29:57.783109       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:29:59.249123       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:29:59.276664       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:29:59.471476       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:29:59.573586       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3c3fa591e90f052837a39431c047bc2857e77775065dbe8c09b7a3ac419f4f84] <==
	I1101 10:29:59.257110       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:29:59.258629       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:29:59.265345       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:29:59.265558       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:29:59.265476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:29:59.265921       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:29:59.266005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:29:59.266400       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-197523"
	I1101 10:29:59.266539       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:29:59.269780       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:29:59.269865       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:29:59.269963       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:29:59.270663       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:29:59.272210       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:29:59.272664       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:29:59.273786       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:29:59.276871       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:29:59.279614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:29:59.279910       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:29:59.280076       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:29:59.280522       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:29:59.283286       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:29:59.285602       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:29:59.289631       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:29:59.294872       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [4742f77b740db06e44bd84780999256c66d075efa0d5a0ffb535c8d55a421cf3] <==
	I1101 10:28:52.528875       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:28:52.529090       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:28:52.536385       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-197523" podCIDRs=["10.244.0.0/24"]
	I1101 10:28:52.539501       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:28:52.539706       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:28:52.542971       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:28:52.543090       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:28:52.543102       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:28:52.543111       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:28:52.543120       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:28:52.543131       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:28:52.543167       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:28:52.549784       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:28:52.550002       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:28:52.569838       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:28:52.569911       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:28:52.569942       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:28:52.582382       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:28:52.592776       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:28:52.593078       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:28:52.593162       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:28:52.593268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-197523"
	I1101 10:28:52.593752       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:28:52.618555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:29:37.601403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [99e565cbd3b72a17fc891167c8a103997c60c46e217825056d511a99adc06362] <==
	I1101 10:28:55.256002       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:28:55.339720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:28:55.442579       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:28:55.442679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:28:55.442798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:28:55.463818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:28:55.463957       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:28:55.471808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:28:55.472328       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:28:55.472523       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:28:55.474068       1 config.go:200] "Starting service config controller"
	I1101 10:28:55.474118       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:28:55.474159       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:28:55.474186       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:28:55.474221       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:28:55.474247       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:28:55.477062       1 config.go:309] "Starting node config controller"
	I1101 10:28:55.477130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:28:55.477160       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:28:55.574263       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:28:55.574410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:28:55.574431       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b85d566999f002e5f5e00e625b0180e1a9e7b912446c36d16f147bcb7d75b5f7] <==
	I1101 10:29:51.309868       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:29:52.528481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:29:56.160313       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:29:56.167682       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:29:56.167810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:29:58.109910       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:29:58.133870       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:29:58.297828       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:29:58.298246       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:29:58.309979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:29:58.311404       1 config.go:200] "Starting service config controller"
	I1101 10:29:58.325515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:29:58.325576       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:29:58.325582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:29:58.325596       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:29:58.325600       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:29:58.362151       1 config.go:309] "Starting node config controller"
	I1101 10:29:58.362230       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:29:58.362261       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:29:58.426833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:29:58.429744       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:29:58.429814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7149d740a36107a476b99d86dc97bfbc2aa105f71c9a1ca2d72cc7dc8b2a5447] <==
	E1101 10:28:44.640587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:28:44.640701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:28:45.489659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:28:45.498402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:28:45.580442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:28:45.636789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:28:45.683265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:28:45.703840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:28:45.723393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:28:45.750070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:28:45.776323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:28:45.804610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:28:45.829492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:28:45.898114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:28:45.950592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:28:45.964864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:28:45.969512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:28:46.140437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 10:28:48.869760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:40.725788       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:29:40.725822       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:29:40.725844       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:29:40.725882       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:40.726142       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:29:40.726159       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c46b8aaeffa0082e965926a54cd85d2e052f19357bd88395e1bc98be5fa281f6] <==
	I1101 10:29:55.201007       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:29:59.016264       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:29:59.016423       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:29:59.023387       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:29:59.023845       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:29:59.023910       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:29:59.023964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:29:59.030635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:59.030727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:59.030783       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:29:59.030816       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:29:59.125792       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:29:59.133830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:29:59.134732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236150    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jhdpd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="79caf352-bf51-4b51-b25b-b7a3daf6cd52" pod="kube-system/kindnet-jhdpd"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236389    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mwwgw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="728cdaf0-253c-46c6-83e3-5cb2e800e24f" pod="kube-system/kube-proxy-mwwgw"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236625    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f473bb29a7c49ae0b00b136ba9170d53" pod="kube-system/kube-controller-manager-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.236868    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7a182863f84e1dad627e81cdc2134cb1" pod="kube-system/kube-scheduler-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: I1101 10:29:49.262296    1313 scope.go:117] "RemoveContainer" containerID="b76464b1416c8abe45c0967675f8a27c2908d2e8954a5595fd5cb5ed2329b506"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.262845    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mwwgw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="728cdaf0-253c-46c6-83e3-5cb2e800e24f" pod="kube-system/kube-proxy-mwwgw"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263022    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-svwdl\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbc67d74-e6c7-40ab-a5d7-6677d46431af" pod="kube-system/coredns-66bc5c9577-svwdl"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263173    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f473bb29a7c49ae0b00b136ba9170d53" pod="kube-system/kube-controller-manager-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263312    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7a182863f84e1dad627e81cdc2134cb1" pod="kube-system/kube-scheduler-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263488    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e080f67ff1fb03855bd1c1d221919660" pod="kube-system/etcd-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263625    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-197523\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="882dec79a7d4eb821a4eee699c3f2bb4" pod="kube-system/kube-apiserver-pause-197523"
	Nov 01 10:29:49 pause-197523 kubelet[1313]: E1101 10:29:49.263758    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jhdpd\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="79caf352-bf51-4b51-b25b-b7a3daf6cd52" pod="kube-system/kindnet-jhdpd"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708380    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-svwdl\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="bbc67d74-e6c7-40ab-a5d7-6677d46431af" pod="kube-system/coredns-66bc5c9577-svwdl"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708548    1313 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-197523\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708567    1313 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-197523\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.708775    1313 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-197523\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.746479    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="f473bb29a7c49ae0b00b136ba9170d53" pod="kube-system/kube-controller-manager-pause-197523"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.818148    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="7a182863f84e1dad627e81cdc2134cb1" pod="kube-system/kube-scheduler-pause-197523"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.956869    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="e080f67ff1fb03855bd1c1d221919660" pod="kube-system/etcd-pause-197523"
	Nov 01 10:29:55 pause-197523 kubelet[1313]: E1101 10:29:55.974726    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-197523\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="882dec79a7d4eb821a4eee699c3f2bb4" pod="kube-system/kube-apiserver-pause-197523"
	Nov 01 10:29:56 pause-197523 kubelet[1313]: E1101 10:29:56.000337    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-jhdpd\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="79caf352-bf51-4b51-b25b-b7a3daf6cd52" pod="kube-system/kindnet-jhdpd"
	Nov 01 10:29:56 pause-197523 kubelet[1313]: E1101 10:29:56.047135    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-mwwgw\" is forbidden: User \"system:node:pause-197523\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-197523' and this object" podUID="728cdaf0-253c-46c6-83e3-5cb2e800e24f" pod="kube-system/kube-proxy-mwwgw"
	Nov 01 10:30:07 pause-197523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:30:07 pause-197523 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:30:07 pause-197523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-197523 -n pause-197523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-197523 -n pause-197523: exit status 2 (521.142686ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-197523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.56s)
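To retrace this failure locally, the same commands the harness ran above can be repeated by hand. A minimal sketch, assuming the pause-197523 profile is still running and out/minikube-linux-arm64 is the binary under test (both names taken from the log above):

	out/minikube-linux-arm64 pause -p pause-197523 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-197523
	kubectl --context pause-197523 get po -A --field-selector=status.phase!=Running

After a successful pause the status call would be expected to report Paused; in the post-mortem above the harness still saw Running and exit status 2.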

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.90791ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
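The MK_ADDON_ENABLE_PAUSED error above comes from the paused-state check that runs `sudo runc list -f json` inside the node; it can be repeated by hand to see whether /run/runc is actually missing. A minimal sketch, assuming the old-k8s-version-180313 profile from this test is still up (this job uses the crio runtime, so an empty or absent runc state directory may be an environment detail rather than evidence the cluster is paused):

	out/minikube-linux-arm64 ssh -p old-k8s-version-180313 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p old-k8s-version-180313 -- ls -ld /run/runc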
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-180313 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-180313 describe deploy/metrics-server -n kube-system: exit status 1 (101.402819ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-180313 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-180313
helpers_test.go:243: (dbg) docker inspect old-k8s-version-180313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1",
	        "Created": "2025-11-01T10:31:56.175953746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456654,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:31:56.267436865Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/hosts",
	        "LogPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1-json.log",
	        "Name": "/old-k8s-version-180313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-180313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-180313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1",
	                "LowerDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-180313",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-180313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-180313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-180313",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-180313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc131b7f1e0d39159dc2cb2b6f60e2fb5b9929164a056b6ec508fbd5687c8b63",
	            "SandboxKey": "/var/run/docker/netns/dc131b7f1e0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-180313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:bb:96:6b:5b:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "166ca61202b04ec7e10cf51d0a2cefb4328ec9285bf6b5c3a38e12ab732f4c8c",
	                    "EndpointID": "fe7832d05e1aa5d1b7a1189c0e1f97577e2e5ac80f9e8c276a65b4821941f259",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-180313",
	                        "d94f4283ef92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
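For reference, the inspect output above shows the published ports of the kic container: container ports 22, 2376, 5000, 8443 and 32443 are bound to 127.0.0.1:33410-33414 on the host. A shorter query for just that mapping, assuming the same container name, would be:

	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-180313

Port 33410 is the SSH endpoint the provisioning log below dials, and 33413 fronts the API server on container port 8443.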
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-180313 logs -n 25: (1.178495614s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-220636 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo docker system info                                                                                                                                                                                                      │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo containerd config dump                                                                                                                                                                                                  │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo crio config                                                                                                                                                                                                             │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ delete  │ -p cilium-220636                                                                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:30 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p force-systemd-env-065424                                                                                                                                                                                                                   │ force-systemd-env-065424 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ cert-options-082900 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:31:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:31:49.746012  456195 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:31:49.746248  456195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:31:49.746278  456195 out.go:374] Setting ErrFile to fd 2...
	I1101 10:31:49.746297  456195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:31:49.746624  456195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:31:49.747177  456195 out.go:368] Setting JSON to false
	I1101 10:31:49.748187  456195 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8059,"bootTime":1761985051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:31:49.748288  456195 start.go:143] virtualization:  
	I1101 10:31:49.752343  456195 out.go:179] * [old-k8s-version-180313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:31:49.757356  456195 notify.go:221] Checking for updates...
	I1101 10:31:49.758081  456195 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:31:49.761660  456195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:31:49.765241  456195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:31:49.768826  456195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:31:49.772125  456195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:31:49.776088  456195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:31:49.779854  456195 config.go:182] Loaded profile config "cert-expiration-459318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:31:49.780034  456195 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:31:49.825832  456195 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:31:49.825979  456195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:31:49.906574  456195 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:31:49.897002802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:31:49.906823  456195 docker.go:319] overlay module found
	I1101 10:31:49.910172  456195 out.go:179] * Using the docker driver based on user configuration
	I1101 10:31:49.913172  456195 start.go:309] selected driver: docker
	I1101 10:31:49.913197  456195 start.go:930] validating driver "docker" against <nil>
	I1101 10:31:49.913212  456195 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:31:49.914032  456195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:31:49.967900  456195 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:31:49.958538708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:31:49.968189  456195 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:31:49.968430  456195 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:31:49.971615  456195 out.go:179] * Using Docker driver with root privileges
	I1101 10:31:49.974568  456195 cni.go:84] Creating CNI manager for ""
	I1101 10:31:49.974632  456195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:31:49.974646  456195 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:31:49.974726  456195 start.go:353] cluster config:
	{Name:old-k8s-version-180313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:31:49.977839  456195 out.go:179] * Starting "old-k8s-version-180313" primary control-plane node in "old-k8s-version-180313" cluster
	I1101 10:31:49.980640  456195 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:31:49.983634  456195 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:31:49.986466  456195 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:31:49.986526  456195 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 10:31:49.986538  456195 cache.go:59] Caching tarball of preloaded images
	I1101 10:31:49.986559  456195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:31:49.986639  456195 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:31:49.986649  456195 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:31:49.986754  456195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/config.json ...
	I1101 10:31:49.986773  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/config.json: {Name:mkcd25f0ccff09773b9b035600687d327ac9a676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:31:50.012909  456195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:31:50.012936  456195 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:31:50.012950  456195 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:31:50.012976  456195 start.go:360] acquireMachinesLock for old-k8s-version-180313: {Name:mk3ec5b6146cc37d4ff7cd4fad3c6dc99a1fadd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:31:50.013132  456195 start.go:364] duration metric: took 136.929µs to acquireMachinesLock for "old-k8s-version-180313"
	I1101 10:31:50.013164  456195 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-180313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180313 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:31:50.013246  456195 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:31:50.017206  456195 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:31:50.017495  456195 start.go:159] libmachine.API.Create for "old-k8s-version-180313" (driver="docker")
	I1101 10:31:50.017546  456195 client.go:173] LocalClient.Create starting
	I1101 10:31:50.017632  456195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 10:31:50.017681  456195 main.go:143] libmachine: Decoding PEM data...
	I1101 10:31:50.017737  456195 main.go:143] libmachine: Parsing certificate...
	I1101 10:31:50.017811  456195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 10:31:50.017842  456195 main.go:143] libmachine: Decoding PEM data...
	I1101 10:31:50.017854  456195 main.go:143] libmachine: Parsing certificate...
	I1101 10:31:50.018300  456195 cli_runner.go:164] Run: docker network inspect old-k8s-version-180313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:31:50.038836  456195 cli_runner.go:211] docker network inspect old-k8s-version-180313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:31:50.038946  456195 network_create.go:284] running [docker network inspect old-k8s-version-180313] to gather additional debugging logs...
	I1101 10:31:50.038978  456195 cli_runner.go:164] Run: docker network inspect old-k8s-version-180313
	W1101 10:31:50.055519  456195 cli_runner.go:211] docker network inspect old-k8s-version-180313 returned with exit code 1
	I1101 10:31:50.055561  456195 network_create.go:287] error running [docker network inspect old-k8s-version-180313]: docker network inspect old-k8s-version-180313: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-180313 not found
	I1101 10:31:50.055576  456195 network_create.go:289] output of [docker network inspect old-k8s-version-180313]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-180313 not found
	
	** /stderr **
	I1101 10:31:50.055695  456195 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:31:50.074085  456195 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4026c1b0063 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ce:bd:30:c3:d1} reservation:<nil>}
	I1101 10:31:50.074480  456195 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e394bead07b9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:98:c6:36:ba:b7} reservation:<nil>}
	I1101 10:31:50.074763  456195 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bd8719a80444 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:75:48:52:a5:ee} reservation:<nil>}
	I1101 10:31:50.075242  456195 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a227e0}
	I1101 10:31:50.075273  456195 network_create.go:124] attempt to create docker network old-k8s-version-180313 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:31:50.075334  456195 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-180313 old-k8s-version-180313
	I1101 10:31:50.140529  456195 network_create.go:108] docker network old-k8s-version-180313 192.168.76.0/24 created
	I1101 10:31:50.140566  456195 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-180313" container
	I1101 10:31:50.140643  456195 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:31:50.158279  456195 cli_runner.go:164] Run: docker volume create old-k8s-version-180313 --label name.minikube.sigs.k8s.io=old-k8s-version-180313 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:31:50.178150  456195 oci.go:103] Successfully created a docker volume old-k8s-version-180313
	I1101 10:31:50.178246  456195 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-180313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-180313 --entrypoint /usr/bin/test -v old-k8s-version-180313:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:31:50.758751  456195 oci.go:107] Successfully prepared a docker volume old-k8s-version-180313
	I1101 10:31:50.758810  456195 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:31:50.758832  456195 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:31:50.758904  456195 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-180313:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:31:56.087380  456195 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-180313:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.328427254s)
	I1101 10:31:56.087458  456195 kic.go:203] duration metric: took 5.328613307s to extract preloaded images to volume ...
	W1101 10:31:56.087650  456195 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:31:56.087761  456195 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:31:56.160445  456195 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-180313 --name old-k8s-version-180313 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-180313 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-180313 --network old-k8s-version-180313 --ip 192.168.76.2 --volume old-k8s-version-180313:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:31:56.490290  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Running}}
	I1101 10:31:56.510409  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:31:56.530078  456195 cli_runner.go:164] Run: docker exec old-k8s-version-180313 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:31:56.589046  456195 oci.go:144] the created container "old-k8s-version-180313" has a running status.
	I1101 10:31:56.589088  456195 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa...
	I1101 10:31:56.689236  456195 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:31:56.714341  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:31:56.740061  456195 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:31:56.740085  456195 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-180313 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:31:56.799650  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:31:56.826663  456195 machine.go:94] provisionDockerMachine start ...
	I1101 10:31:56.826771  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:31:56.853752  456195 main.go:143] libmachine: Using SSH client type: native
	I1101 10:31:56.854321  456195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1101 10:31:56.854339  456195 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:31:56.861826  456195 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:32:00.024093  456195 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-180313
	
	I1101 10:32:00.024121  456195 ubuntu.go:182] provisioning hostname "old-k8s-version-180313"
	I1101 10:32:00.024197  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:00.086461  456195 main.go:143] libmachine: Using SSH client type: native
	I1101 10:32:00.086800  456195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1101 10:32:00.086813  456195 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-180313 && echo "old-k8s-version-180313" | sudo tee /etc/hostname
	I1101 10:32:00.440115  456195 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-180313
	
	I1101 10:32:00.440316  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:00.468626  456195 main.go:143] libmachine: Using SSH client type: native
	I1101 10:32:00.468948  456195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1101 10:32:00.468966  456195 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-180313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-180313/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-180313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:32:00.639665  456195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:32:00.639739  456195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:32:00.639768  456195 ubuntu.go:190] setting up certificates
	I1101 10:32:00.639779  456195 provision.go:84] configureAuth start
	I1101 10:32:00.639872  456195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180313
	I1101 10:32:00.659070  456195 provision.go:143] copyHostCerts
	I1101 10:32:00.659157  456195 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:32:00.659171  456195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:32:00.659255  456195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:32:00.659426  456195 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:32:00.659440  456195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:32:00.659477  456195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:32:00.659539  456195 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:32:00.659549  456195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:32:00.659576  456195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:32:00.659638  456195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-180313 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-180313]
	I1101 10:32:00.755313  456195 provision.go:177] copyRemoteCerts
	I1101 10:32:00.755407  456195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:32:00.755450  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:00.773469  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:00.878201  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:32:00.898988  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:32:00.919267  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:32:00.940434  456195 provision.go:87] duration metric: took 300.631076ms to configureAuth
	I1101 10:32:00.940505  456195 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:32:00.940737  456195 config.go:182] Loaded profile config "old-k8s-version-180313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:32:00.940854  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:00.959518  456195 main.go:143] libmachine: Using SSH client type: native
	I1101 10:32:00.959831  456195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1101 10:32:00.959850  456195 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:32:01.225066  456195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:32:01.225100  456195 machine.go:97] duration metric: took 4.398416036s to provisionDockerMachine
	I1101 10:32:01.225110  456195 client.go:176] duration metric: took 11.207554115s to LocalClient.Create
	I1101 10:32:01.225124  456195 start.go:167] duration metric: took 11.207632836s to libmachine.API.Create "old-k8s-version-180313"
	I1101 10:32:01.225132  456195 start.go:293] postStartSetup for "old-k8s-version-180313" (driver="docker")
	I1101 10:32:01.225184  456195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:32:01.225271  456195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:32:01.225333  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:01.243692  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:01.352792  456195 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:32:01.356624  456195 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:32:01.356657  456195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:32:01.356669  456195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:32:01.356724  456195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:32:01.356814  456195 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:32:01.356925  456195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:32:01.367337  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:32:01.393114  456195 start.go:296] duration metric: took 167.905057ms for postStartSetup
	I1101 10:32:01.393504  456195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180313
	I1101 10:32:01.413934  456195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/config.json ...
	I1101 10:32:01.414222  456195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:32:01.414264  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:01.431350  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:01.535156  456195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:32:01.540210  456195 start.go:128] duration metric: took 11.526945321s to createHost
	I1101 10:32:01.540236  456195 start.go:83] releasing machines lock for "old-k8s-version-180313", held for 11.527093294s
	I1101 10:32:01.540321  456195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180313
	I1101 10:32:01.556429  456195 ssh_runner.go:195] Run: cat /version.json
	I1101 10:32:01.556478  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:01.556479  456195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:32:01.556542  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:01.574443  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:01.590658  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:01.681516  456195 ssh_runner.go:195] Run: systemctl --version
	I1101 10:32:01.772953  456195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:32:01.809737  456195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:32:01.814239  456195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:32:01.814358  456195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:32:01.847229  456195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:32:01.847248  456195 start.go:496] detecting cgroup driver to use...
	I1101 10:32:01.847278  456195 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:32:01.847325  456195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:32:01.868674  456195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:32:01.882560  456195 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:32:01.882622  456195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:32:01.900984  456195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:32:01.922012  456195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:32:02.049916  456195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:32:02.177233  456195 docker.go:234] disabling docker service ...
	I1101 10:32:02.177368  456195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:32:02.201217  456195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:32:02.215184  456195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:32:02.333825  456195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:32:02.451922  456195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:32:02.465575  456195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:32:02.479923  456195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:32:02.480047  456195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.488944  456195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:32:02.489074  456195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.498178  456195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.507292  456195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.516993  456195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:32:02.525337  456195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.534505  456195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.548904  456195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:32:02.558212  456195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:32:02.565884  456195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:32:02.574362  456195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:32:02.695728  456195 ssh_runner.go:195] Run: sudo systemctl restart crio
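Taken together, the commands above point crictl at the CRI-O socket and patch the CRI-O drop-in before restarting the runtime. A minimal standalone sketch of the same sequence (same drop-in path and values as in the log; not minikube's literal code path):

#!/usr/bin/env bash
# Sketch: replay the CRI-O configuration steps from the log by hand.
set -euo pipefail
CONF=/etc/crio/crio.conf.d/02-crio.conf   # drop-in path used in the log

# Tell crictl where the CRI-O socket lives.
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml >/dev/null

# Pause image and cgroup driver, then run conmon inside the pod cgroup.
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

# Make sure a default_sysctls list exists and allows unprivileged low ports.
sudo grep -q '^ *default_sysctls' "$CONF" || \
  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

sudo systemctl daemon-reload
sudo systemctl restart crio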
	I1101 10:32:02.853466  456195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:32:02.853574  456195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:32:02.861247  456195 start.go:564] Will wait 60s for crictl version
	I1101 10:32:02.861331  456195 ssh_runner.go:195] Run: which crictl
	I1101 10:32:02.865508  456195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:32:02.890074  456195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:32:02.890222  456195 ssh_runner.go:195] Run: crio --version
	I1101 10:32:02.919504  456195 ssh_runner.go:195] Run: crio --version
	I1101 10:32:02.954847  456195 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:32:02.957623  456195 cli_runner.go:164] Run: docker network inspect old-k8s-version-180313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:32:02.976803  456195 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:32:02.980985  456195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
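The /etc/hosts update above is a small idempotent pattern: filter out any existing mapping for the name, append the fresh one, and copy the result back in a single privileged step. As a standalone sketch (IP and hostname taken from the log):

#!/usr/bin/env bash
# Sketch: idempotently map host.minikube.internal to the network gateway IP.
ip=192.168.76.1
name=host.minikube.internal

# Keep every line that does not already end in "<tab>$name", add the new
# mapping, then install the rebuilt file over /etc/hosts.
{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"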
	I1101 10:32:02.993185  456195 kubeadm.go:884] updating cluster {Name:old-k8s-version-180313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:32:02.993301  456195 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:32:02.993381  456195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:32:03.042387  456195 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:32:03.042412  456195 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:32:03.042490  456195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:32:03.075034  456195 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:32:03.075061  456195 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:32:03.075069  456195 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1101 10:32:03.075221  456195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-180313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
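The unit override printed above is what ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 372-byte scp a few lines below). Installing it by hand would look roughly like this sketch (flags copied verbatim from the log):

#!/usr/bin/env bash
# Sketch: write the kubelet drop-in shown above, then reload and start kubelet.
set -euo pipefail
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-180313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

[Install]
EOF
sudo systemctl daemon-reload
sudo systemctl start kubelet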
	I1101 10:32:03.075328  456195 ssh_runner.go:195] Run: crio config
	I1101 10:32:03.135881  456195 cni.go:84] Creating CNI manager for ""
	I1101 10:32:03.135907  456195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:32:03.135923  456195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:32:03.135946  456195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-180313 NodeName:old-k8s-version-180313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:32:03.136083  456195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-180313"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:32:03.136156  456195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:32:03.144470  456195 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:32:03.144561  456195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:32:03.152611  456195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:32:03.166991  456195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:32:03.180452  456195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
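The rendered kubeadm documents above are staged as /var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml, and fed to the kubeadm init invocation that appears further down in the log. Condensed into a sketch (binary path and an abbreviated ignore list taken from that init line):

#!/usr/bin/env bash
# Sketch: promote the staged config and run kubeadm init against it.
set -euo pipefail
KVER=v1.28.0
CFG=/var/tmp/minikube/kubeadm.yaml

sudo cp "${CFG}.new" "$CFG"
sudo env PATH="/var/lib/minikube/binaries/${KVER}:$PATH" \
  kubeadm init --config "$CFG" \
  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables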
	I1101 10:32:03.194466  456195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:32:03.199064  456195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:32:03.208907  456195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:32:03.332074  456195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:32:03.350736  456195 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313 for IP: 192.168.76.2
	I1101 10:32:03.350755  456195 certs.go:195] generating shared ca certs ...
	I1101 10:32:03.350775  456195 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:03.350917  456195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:32:03.350970  456195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:32:03.350981  456195 certs.go:257] generating profile certs ...
	I1101 10:32:03.351038  456195 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.key
	I1101 10:32:03.351055  456195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt with IP's: []
	I1101 10:32:04.074286  456195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt ...
	I1101 10:32:04.074322  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: {Name:mka81eb82dc3c0880ba64eb081a70b0ee7ce49c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:04.074532  456195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.key ...
	I1101 10:32:04.074551  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.key: {Name:mkb71bd45afca35fd5808dab4c6ae2344144a425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:04.074656  456195 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.key.1cd8d7c8
	I1101 10:32:04.074677  456195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.crt.1cd8d7c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:32:04.194030  456195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.crt.1cd8d7c8 ...
	I1101 10:32:04.194067  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.crt.1cd8d7c8: {Name:mk624afea3931fbf582cc6495e5fad798dce0861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:04.194269  456195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.key.1cd8d7c8 ...
	I1101 10:32:04.194285  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.key.1cd8d7c8: {Name:mke2ac79194b69f0e2bcfaabe8a64a89cfed4b88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:04.194378  456195 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.crt.1cd8d7c8 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.crt
	I1101 10:32:04.194467  456195 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.key.1cd8d7c8 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.key
	I1101 10:32:04.194531  456195 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.key
	I1101 10:32:04.194549  456195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.crt with IP's: []
	I1101 10:32:05.348578  456195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.crt ...
	I1101 10:32:05.348612  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.crt: {Name:mk83b44b442fa5cbae6a5e2e9497204ce254ca5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:05.348817  456195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.key ...
	I1101 10:32:05.348832  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.key: {Name:mk898605adf226b4f0934163c9e47e8df2019ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:05.349037  456195 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:32:05.349095  456195 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:32:05.349109  456195 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:32:05.349135  456195 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:32:05.349159  456195 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:32:05.349180  456195 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:32:05.349231  456195 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:32:05.349937  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:32:05.370271  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:32:05.391797  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:32:05.410888  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:32:05.429388  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:32:05.448206  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:32:05.466817  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:32:05.485426  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:32:05.503704  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:32:05.524826  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:32:05.544126  456195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:32:05.562944  456195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:32:05.576708  456195 ssh_runner.go:195] Run: openssl version
	I1101 10:32:05.583324  456195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:32:05.592924  456195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:32:05.596935  456195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:32:05.597008  456195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:32:05.640347  456195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:32:05.651024  456195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:32:05.662569  456195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:32:05.667147  456195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:32:05.667232  456195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:32:05.709982  456195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:32:05.720802  456195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:32:05.732874  456195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:32:05.736956  456195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:32:05.737030  456195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:32:05.778493  456195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
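Each "openssl x509 -hash" / "ln -fs" pair above installs a certificate under the subject-hash name that OpenSSL expects when it scans /etc/ssl/certs as a CA directory. The pattern in isolation, as a sketch (the minikubeCA certificate hashes to b5213941 in this run):

#!/usr/bin/env bash
# Sketch: expose a CA certificate under its OpenSSL subject-hash name.
set -euo pipefail
cert=/usr/share/ca-certificates/minikubeCA.pem

hash=$(openssl x509 -hash -noout -in "$cert")    # prints the subject hash, e.g. b5213941
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # ".0" is the per-hash collision counter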
	I1101 10:32:05.787146  456195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:32:05.790780  456195 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:32:05.790827  456195 kubeadm.go:401] StartCluster: {Name:old-k8s-version-180313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:32:05.790906  456195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:32:05.790979  456195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:32:05.822490  456195 cri.go:89] found id: ""
	I1101 10:32:05.822591  456195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:32:05.830576  456195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:32:05.838338  456195 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:32:05.838423  456195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:32:05.847493  456195 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:32:05.847515  456195 kubeadm.go:158] found existing configuration files:
	
	I1101 10:32:05.847573  456195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:32:05.855529  456195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:32:05.855600  456195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:32:05.869340  456195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:32:05.877309  456195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:32:05.877380  456195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:32:05.885067  456195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:32:05.893454  456195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:32:05.893529  456195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:32:05.900956  456195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:32:05.909075  456195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:32:05.909176  456195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:32:05.916690  456195 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:32:05.968267  456195 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 10:32:05.968340  456195 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:32:06.012631  456195 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:32:06.012711  456195 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:32:06.012755  456195 kubeadm.go:319] OS: Linux
	I1101 10:32:06.012822  456195 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:32:06.012877  456195 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:32:06.012931  456195 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:32:06.012993  456195 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:32:06.013058  456195 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:32:06.013136  456195 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:32:06.013189  456195 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:32:06.013241  456195 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:32:06.013294  456195 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:32:06.097367  456195 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:32:06.097570  456195 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:32:06.097738  456195 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 10:32:06.241014  456195 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:32:06.246983  456195 out.go:252]   - Generating certificates and keys ...
	I1101 10:32:06.247144  456195 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:32:06.247249  456195 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:32:06.958292  456195 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:32:07.753501  456195 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:32:08.337154  456195 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:32:08.571663  456195 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:32:08.933012  456195 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:32:08.933207  456195 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-180313] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:32:09.141897  456195 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:32:09.142085  456195 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-180313] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:32:09.769842  456195 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:32:10.773955  456195 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:32:11.342433  456195 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:32:11.343027  456195 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:32:12.176515  456195 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:32:12.506447  456195 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:32:12.813869  456195 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:32:13.155618  456195 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:32:13.155722  456195 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:32:13.155793  456195 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:32:13.159582  456195 out.go:252]   - Booting up control plane ...
	I1101 10:32:13.159691  456195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:32:13.159775  456195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:32:13.159846  456195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:32:13.182150  456195 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:32:13.183064  456195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:32:13.183121  456195 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:32:13.316703  456195 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 10:32:20.820783  456195 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.502674 seconds
	I1101 10:32:20.820911  456195 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:32:20.850690  456195 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:32:21.381814  456195 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:32:21.382034  456195 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-180313 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:32:21.894661  456195 kubeadm.go:319] [bootstrap-token] Using token: sbo429.lxz16slxo2xcm4v1
	I1101 10:32:21.897571  456195 out.go:252]   - Configuring RBAC rules ...
	I1101 10:32:21.897782  456195 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:32:21.904043  456195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:32:21.915601  456195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:32:21.920409  456195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:32:21.927012  456195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:32:21.931333  456195 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:32:21.947723  456195 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:32:22.242522  456195 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:32:22.331515  456195 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:32:22.332789  456195 kubeadm.go:319] 
	I1101 10:32:22.332863  456195 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:32:22.332869  456195 kubeadm.go:319] 
	I1101 10:32:22.332964  456195 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:32:22.332970  456195 kubeadm.go:319] 
	I1101 10:32:22.332996  456195 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:32:22.333057  456195 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:32:22.333119  456195 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:32:22.333124  456195 kubeadm.go:319] 
	I1101 10:32:22.333180  456195 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:32:22.333185  456195 kubeadm.go:319] 
	I1101 10:32:22.333235  456195 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:32:22.333240  456195 kubeadm.go:319] 
	I1101 10:32:22.333294  456195 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:32:22.333377  456195 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:32:22.333450  456195 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:32:22.333454  456195 kubeadm.go:319] 
	I1101 10:32:22.333542  456195 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:32:22.333622  456195 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:32:22.333626  456195 kubeadm.go:319] 
	I1101 10:32:22.333729  456195 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sbo429.lxz16slxo2xcm4v1 \
	I1101 10:32:22.333838  456195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:32:22.333859  456195 kubeadm.go:319] 	--control-plane 
	I1101 10:32:22.333864  456195 kubeadm.go:319] 
	I1101 10:32:22.333953  456195 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:32:22.333957  456195 kubeadm.go:319] 
	I1101 10:32:22.334042  456195 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sbo429.lxz16slxo2xcm4v1 \
	I1101 10:32:22.334148  456195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:32:22.337574  456195 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:32:22.337706  456195 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:32:22.337729  456195 cni.go:84] Creating CNI manager for ""
	I1101 10:32:22.337737  456195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:32:22.341066  456195 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:32:22.344120  456195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:32:22.360383  456195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 10:32:22.360401  456195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:32:22.389281  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:32:23.331117  456195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:32:23.331208  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:23.331256  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-180313 minikube.k8s.io/updated_at=2025_11_01T10_32_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=old-k8s-version-180313 minikube.k8s.io/primary=true
	I1101 10:32:23.472120  456195 ops.go:34] apiserver oom_adj: -16
	I1101 10:32:23.472237  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:23.972319  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:24.472596  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:24.973098  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:25.472767  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:25.973239  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:26.472905  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:26.972938  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:27.473307  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:27.972320  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:28.472369  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:28.972762  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:29.472359  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:29.972822  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:30.472962  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:30.972980  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:31.472836  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:31.972984  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:32.472302  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:32.973113  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:33.472529  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:33.972380  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:34.473180  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:34.972737  456195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:32:35.164321  456195 kubeadm.go:1114] duration metric: took 11.833186046s to wait for elevateKubeSystemPrivileges
	I1101 10:32:35.164349  456195 kubeadm.go:403] duration metric: took 29.373525118s to StartCluster
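The burst of identical "kubectl get sa default" calls above is a half-second poll: after kubeadm finishes, minikube waits for the default ServiceAccount to be created by the controller-manager before moving on (the elevateKubeSystemPrivileges step). The same wait, written out as a sketch:

#!/usr/bin/env bash
# Sketch: poll until the default ServiceAccount exists, as the log does above.
KUBECTL=/var/lib/minikube/binaries/v1.28.0/kubectl
KCFG=/var/lib/minikube/kubeconfig

until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
  sleep 0.5
done
echo "default ServiceAccount is present"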
	I1101 10:32:35.164368  456195 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:35.164430  456195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:32:35.165464  456195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:32:35.165862  456195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:32:35.166031  456195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:32:35.166366  456195 config.go:182] Loaded profile config "old-k8s-version-180313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:32:35.166412  456195 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:32:35.166486  456195 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-180313"
	I1101 10:32:35.166501  456195 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-180313"
	I1101 10:32:35.166538  456195 host.go:66] Checking if "old-k8s-version-180313" exists ...
	I1101 10:32:35.167340  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:32:35.167505  456195 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-180313"
	I1101 10:32:35.167521  456195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-180313"
	I1101 10:32:35.167784  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:32:35.171818  456195 out.go:179] * Verifying Kubernetes components...
	I1101 10:32:35.175482  456195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:32:35.210331  456195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:32:35.213405  456195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:32:35.213429  456195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:32:35.213510  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:35.218415  456195 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-180313"
	I1101 10:32:35.218553  456195 host.go:66] Checking if "old-k8s-version-180313" exists ...
	I1101 10:32:35.219005  456195 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:32:35.254190  456195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:32:35.254214  456195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:32:35.254282  456195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:32:35.268439  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:35.294236  456195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:32:35.536269  456195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:32:35.591649  456195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:32:35.591897  456195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:32:35.607629  456195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:32:36.711588  456195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.175230976s)
	I1101 10:32:36.941595  456195 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.349635441s)
	I1101 10:32:36.942724  456195 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.350992226s)
	I1101 10:32:36.942804  456195 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
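The ConfigMap round-trip above is how the host.minikube.internal record lands in CoreDNS: the Corefile is pulled, a hosts{} stanza (plus query logging) is spliced in with sed, and the result is pushed back with kubectl replace. Isolated as a sketch (same paths and IP as in the log):

#!/usr/bin/env bash
# Sketch: splice a hosts{} stanza into the coredns Corefile and replace the ConfigMap.
KUBECTL=/var/lib/minikube/binaries/v1.28.0/kubectl
KCFG=/var/lib/minikube/kubeconfig

sudo "$KUBECTL" --kubeconfig="$KCFG" -n kube-system get configmap coredns -o yaml \
  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
        -e '/^        errors *$/i \        log' \
  | sudo "$KUBECTL" --kubeconfig="$KCFG" replace -f -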
	I1101 10:32:36.943170  456195 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-180313" to be "Ready" ...
	I1101 10:32:37.340108  456195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.732388263s)
	I1101 10:32:37.344148  456195 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 10:32:37.347091  456195 addons.go:515] duration metric: took 2.180658666s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 10:32:37.448490  456195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-180313" context rescaled to 1 replicas
	W1101 10:32:38.947095  456195 node_ready.go:57] node "old-k8s-version-180313" has "Ready":"False" status (will retry)
	W1101 10:32:41.447256  456195 node_ready.go:57] node "old-k8s-version-180313" has "Ready":"False" status (will retry)
	W1101 10:32:43.947067  456195 node_ready.go:57] node "old-k8s-version-180313" has "Ready":"False" status (will retry)
	W1101 10:32:46.447102  456195 node_ready.go:57] node "old-k8s-version-180313" has "Ready":"False" status (will retry)
	W1101 10:32:48.946529  456195 node_ready.go:57] node "old-k8s-version-180313" has "Ready":"False" status (will retry)
	I1101 10:32:49.446825  456195 node_ready.go:49] node "old-k8s-version-180313" is "Ready"
	I1101 10:32:49.446857  456195 node_ready.go:38] duration metric: took 12.50352519s for node "old-k8s-version-180313" to be "Ready" ...
	I1101 10:32:49.446871  456195 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:32:49.446927  456195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:32:49.459840  456195 api_server.go:72] duration metric: took 14.293777659s to wait for apiserver process to appear ...
	I1101 10:32:49.459866  456195 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:32:49.459886  456195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:32:49.470098  456195 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:32:49.471557  456195 api_server.go:141] control plane version: v1.28.0
	I1101 10:32:49.471585  456195 api_server.go:131] duration metric: took 11.711388ms to wait for apiserver health ...
	I1101 10:32:49.471595  456195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:32:49.475295  456195 system_pods.go:59] 8 kube-system pods found
	I1101 10:32:49.475331  456195 system_pods.go:61] "coredns-5dd5756b68-ltprk" [6f8135b4-6f5e-47af-bfd2-3f22846c0482] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:32:49.475338  456195 system_pods.go:61] "etcd-old-k8s-version-180313" [5c48e3c1-14e7-4eb4-b7d7-cd22320e48b8] Running
	I1101 10:32:49.475344  456195 system_pods.go:61] "kindnet-2qdl9" [fb4eda13-162f-45ae-bbf9-7e8838c8cec6] Running
	I1101 10:32:49.475348  456195 system_pods.go:61] "kube-apiserver-old-k8s-version-180313" [f29b17b6-f984-47a2-ab6b-1c4d8746abc6] Running
	I1101 10:32:49.475354  456195 system_pods.go:61] "kube-controller-manager-old-k8s-version-180313" [7caa1e73-59f3-42d1-9ea3-5a4735ea940a] Running
	I1101 10:32:49.475358  456195 system_pods.go:61] "kube-proxy-ltbrb" [d1582ecb-2805-4eb7-9adb-8f3834e8ef13] Running
	I1101 10:32:49.475363  456195 system_pods.go:61] "kube-scheduler-old-k8s-version-180313" [b16813b2-809a-44d6-9867-10cb72d22182] Running
	I1101 10:32:49.475377  456195 system_pods.go:61] "storage-provisioner" [3139bac6-9550-4f9f-8077-b4f35da28974] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:32:49.475384  456195 system_pods.go:74] duration metric: took 3.783143ms to wait for pod list to return data ...
	I1101 10:32:49.475398  456195 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:32:49.477888  456195 default_sa.go:45] found service account: "default"
	I1101 10:32:49.477912  456195 default_sa.go:55] duration metric: took 2.506673ms for default service account to be created ...
	I1101 10:32:49.477922  456195 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:32:49.481971  456195 system_pods.go:86] 8 kube-system pods found
	I1101 10:32:49.482002  456195 system_pods.go:89] "coredns-5dd5756b68-ltprk" [6f8135b4-6f5e-47af-bfd2-3f22846c0482] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:32:49.482009  456195 system_pods.go:89] "etcd-old-k8s-version-180313" [5c48e3c1-14e7-4eb4-b7d7-cd22320e48b8] Running
	I1101 10:32:49.482015  456195 system_pods.go:89] "kindnet-2qdl9" [fb4eda13-162f-45ae-bbf9-7e8838c8cec6] Running
	I1101 10:32:49.482020  456195 system_pods.go:89] "kube-apiserver-old-k8s-version-180313" [f29b17b6-f984-47a2-ab6b-1c4d8746abc6] Running
	I1101 10:32:49.482025  456195 system_pods.go:89] "kube-controller-manager-old-k8s-version-180313" [7caa1e73-59f3-42d1-9ea3-5a4735ea940a] Running
	I1101 10:32:49.482029  456195 system_pods.go:89] "kube-proxy-ltbrb" [d1582ecb-2805-4eb7-9adb-8f3834e8ef13] Running
	I1101 10:32:49.482034  456195 system_pods.go:89] "kube-scheduler-old-k8s-version-180313" [b16813b2-809a-44d6-9867-10cb72d22182] Running
	I1101 10:32:49.482039  456195 system_pods.go:89] "storage-provisioner" [3139bac6-9550-4f9f-8077-b4f35da28974] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:32:49.482070  456195 retry.go:31] will retry after 289.454005ms: missing components: kube-dns
	I1101 10:32:49.776451  456195 system_pods.go:86] 8 kube-system pods found
	I1101 10:32:49.776487  456195 system_pods.go:89] "coredns-5dd5756b68-ltprk" [6f8135b4-6f5e-47af-bfd2-3f22846c0482] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:32:49.776497  456195 system_pods.go:89] "etcd-old-k8s-version-180313" [5c48e3c1-14e7-4eb4-b7d7-cd22320e48b8] Running
	I1101 10:32:49.776504  456195 system_pods.go:89] "kindnet-2qdl9" [fb4eda13-162f-45ae-bbf9-7e8838c8cec6] Running
	I1101 10:32:49.776509  456195 system_pods.go:89] "kube-apiserver-old-k8s-version-180313" [f29b17b6-f984-47a2-ab6b-1c4d8746abc6] Running
	I1101 10:32:49.776514  456195 system_pods.go:89] "kube-controller-manager-old-k8s-version-180313" [7caa1e73-59f3-42d1-9ea3-5a4735ea940a] Running
	I1101 10:32:49.776518  456195 system_pods.go:89] "kube-proxy-ltbrb" [d1582ecb-2805-4eb7-9adb-8f3834e8ef13] Running
	I1101 10:32:49.776522  456195 system_pods.go:89] "kube-scheduler-old-k8s-version-180313" [b16813b2-809a-44d6-9867-10cb72d22182] Running
	I1101 10:32:49.776532  456195 system_pods.go:89] "storage-provisioner" [3139bac6-9550-4f9f-8077-b4f35da28974] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:32:49.776549  456195 retry.go:31] will retry after 256.769247ms: missing components: kube-dns
	I1101 10:32:50.040318  456195 system_pods.go:86] 8 kube-system pods found
	I1101 10:32:50.040358  456195 system_pods.go:89] "coredns-5dd5756b68-ltprk" [6f8135b4-6f5e-47af-bfd2-3f22846c0482] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:32:50.040365  456195 system_pods.go:89] "etcd-old-k8s-version-180313" [5c48e3c1-14e7-4eb4-b7d7-cd22320e48b8] Running
	I1101 10:32:50.040372  456195 system_pods.go:89] "kindnet-2qdl9" [fb4eda13-162f-45ae-bbf9-7e8838c8cec6] Running
	I1101 10:32:50.040376  456195 system_pods.go:89] "kube-apiserver-old-k8s-version-180313" [f29b17b6-f984-47a2-ab6b-1c4d8746abc6] Running
	I1101 10:32:50.040381  456195 system_pods.go:89] "kube-controller-manager-old-k8s-version-180313" [7caa1e73-59f3-42d1-9ea3-5a4735ea940a] Running
	I1101 10:32:50.040426  456195 system_pods.go:89] "kube-proxy-ltbrb" [d1582ecb-2805-4eb7-9adb-8f3834e8ef13] Running
	I1101 10:32:50.040440  456195 system_pods.go:89] "kube-scheduler-old-k8s-version-180313" [b16813b2-809a-44d6-9867-10cb72d22182] Running
	I1101 10:32:50.040455  456195 system_pods.go:89] "storage-provisioner" [3139bac6-9550-4f9f-8077-b4f35da28974] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:32:50.040494  456195 retry.go:31] will retry after 436.008545ms: missing components: kube-dns
	I1101 10:32:50.480565  456195 system_pods.go:86] 8 kube-system pods found
	I1101 10:32:50.480599  456195 system_pods.go:89] "coredns-5dd5756b68-ltprk" [6f8135b4-6f5e-47af-bfd2-3f22846c0482] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:32:50.480606  456195 system_pods.go:89] "etcd-old-k8s-version-180313" [5c48e3c1-14e7-4eb4-b7d7-cd22320e48b8] Running
	I1101 10:32:50.480612  456195 system_pods.go:89] "kindnet-2qdl9" [fb4eda13-162f-45ae-bbf9-7e8838c8cec6] Running
	I1101 10:32:50.480616  456195 system_pods.go:89] "kube-apiserver-old-k8s-version-180313" [f29b17b6-f984-47a2-ab6b-1c4d8746abc6] Running
	I1101 10:32:50.480625  456195 system_pods.go:89] "kube-controller-manager-old-k8s-version-180313" [7caa1e73-59f3-42d1-9ea3-5a4735ea940a] Running
	I1101 10:32:50.480629  456195 system_pods.go:89] "kube-proxy-ltbrb" [d1582ecb-2805-4eb7-9adb-8f3834e8ef13] Running
	I1101 10:32:50.480634  456195 system_pods.go:89] "kube-scheduler-old-k8s-version-180313" [b16813b2-809a-44d6-9867-10cb72d22182] Running
	I1101 10:32:50.480640  456195 system_pods.go:89] "storage-provisioner" [3139bac6-9550-4f9f-8077-b4f35da28974] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:32:50.480661  456195 retry.go:31] will retry after 598.650575ms: missing components: kube-dns
	I1101 10:32:51.084079  456195 system_pods.go:86] 8 kube-system pods found
	I1101 10:32:51.084114  456195 system_pods.go:89] "coredns-5dd5756b68-ltprk" [6f8135b4-6f5e-47af-bfd2-3f22846c0482] Running
	I1101 10:32:51.084121  456195 system_pods.go:89] "etcd-old-k8s-version-180313" [5c48e3c1-14e7-4eb4-b7d7-cd22320e48b8] Running
	I1101 10:32:51.084126  456195 system_pods.go:89] "kindnet-2qdl9" [fb4eda13-162f-45ae-bbf9-7e8838c8cec6] Running
	I1101 10:32:51.084131  456195 system_pods.go:89] "kube-apiserver-old-k8s-version-180313" [f29b17b6-f984-47a2-ab6b-1c4d8746abc6] Running
	I1101 10:32:51.084135  456195 system_pods.go:89] "kube-controller-manager-old-k8s-version-180313" [7caa1e73-59f3-42d1-9ea3-5a4735ea940a] Running
	I1101 10:32:51.084139  456195 system_pods.go:89] "kube-proxy-ltbrb" [d1582ecb-2805-4eb7-9adb-8f3834e8ef13] Running
	I1101 10:32:51.084146  456195 system_pods.go:89] "kube-scheduler-old-k8s-version-180313" [b16813b2-809a-44d6-9867-10cb72d22182] Running
	I1101 10:32:51.084154  456195 system_pods.go:89] "storage-provisioner" [3139bac6-9550-4f9f-8077-b4f35da28974] Running
	I1101 10:32:51.084171  456195 system_pods.go:126] duration metric: took 1.606243636s to wait for k8s-apps to be running ...
	I1101 10:32:51.084183  456195 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:32:51.084245  456195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:32:51.099097  456195 system_svc.go:56] duration metric: took 14.90318ms WaitForService to wait for kubelet
	I1101 10:32:51.099126  456195 kubeadm.go:587] duration metric: took 15.93306704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:32:51.099147  456195 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:32:51.102075  456195 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:32:51.102112  456195 node_conditions.go:123] node cpu capacity is 2
	I1101 10:32:51.102126  456195 node_conditions.go:105] duration metric: took 2.973403ms to run NodePressure ...
	I1101 10:32:51.102139  456195 start.go:242] waiting for startup goroutines ...
	I1101 10:32:51.102147  456195 start.go:247] waiting for cluster config update ...
	I1101 10:32:51.102158  456195 start.go:256] writing updated cluster config ...
	I1101 10:32:51.102469  456195 ssh_runner.go:195] Run: rm -f paused
	I1101 10:32:51.106318  456195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:32:51.110834  456195 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ltprk" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.116270  456195 pod_ready.go:94] pod "coredns-5dd5756b68-ltprk" is "Ready"
	I1101 10:32:51.116300  456195 pod_ready.go:86] duration metric: took 5.442914ms for pod "coredns-5dd5756b68-ltprk" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.120486  456195 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.125972  456195 pod_ready.go:94] pod "etcd-old-k8s-version-180313" is "Ready"
	I1101 10:32:51.126007  456195 pod_ready.go:86] duration metric: took 5.494664ms for pod "etcd-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.129233  456195 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.134964  456195 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-180313" is "Ready"
	I1101 10:32:51.134996  456195 pod_ready.go:86] duration metric: took 5.688964ms for pod "kube-apiserver-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.142188  456195 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.511046  456195 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-180313" is "Ready"
	I1101 10:32:51.511079  456195 pod_ready.go:86] duration metric: took 368.863739ms for pod "kube-controller-manager-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:51.712609  456195 pod_ready.go:83] waiting for pod "kube-proxy-ltbrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:52.110671  456195 pod_ready.go:94] pod "kube-proxy-ltbrb" is "Ready"
	I1101 10:32:52.110699  456195 pod_ready.go:86] duration metric: took 398.020755ms for pod "kube-proxy-ltbrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:52.311238  456195 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:52.710928  456195 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-180313" is "Ready"
	I1101 10:32:52.710956  456195 pod_ready.go:86] duration metric: took 399.695853ms for pod "kube-scheduler-old-k8s-version-180313" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:32:52.710969  456195 pod_ready.go:40] duration metric: took 1.604618188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:32:52.767149  456195 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 10:32:52.770300  456195 out.go:203] 
	W1101 10:32:52.773122  456195 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:32:52.776066  456195 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:32:52.780054  456195 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-180313" cluster and "default" namespace by default
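	For reference, the pod-readiness checks logged above can be repeated by hand against the same cluster. A minimal sketch, assuming minikube created a kubeconfig context named after the profile (old-k8s-version-180313) and that kubectl is on the PATH:
	
	    # list the kube-system pods the start-up code waits on
	    kubectl --context old-k8s-version-180313 -n kube-system get pods -o wide
	    # block until every kube-system pod reports Ready (mirrors the "extra waiting up to 4m0s" step above)
	    kubectl --context old-k8s-version-180313 -n kube-system wait pod --all --for=condition=Ready --timeout=4m0s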
	
	
	==> CRI-O <==
	Nov 01 10:32:49 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:49.796501933Z" level=info msg="Created container 235736f1731586a1a6d6c8b7c6744e5b99e3192b610f2e4c1ae7c83831077186: kube-system/coredns-5dd5756b68-ltprk/coredns" id=791c8087-b2a4-4d8a-8465-a6e9aeff36a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:32:49 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:49.797643674Z" level=info msg="Starting container: 235736f1731586a1a6d6c8b7c6744e5b99e3192b610f2e4c1ae7c83831077186" id=8d212eb7-f9bf-4e90-8e46-f178ba4746a0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:32:49 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:49.803630003Z" level=info msg="Started container" PID=1941 containerID=235736f1731586a1a6d6c8b7c6744e5b99e3192b610f2e4c1ae7c83831077186 description=kube-system/coredns-5dd5756b68-ltprk/coredns id=8d212eb7-f9bf-4e90-8e46-f178ba4746a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aab2399c4eac88f41b1f6132623fcea82f765cb711a9c7f338da10a9e6145bff
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.324686871Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bfc9b9ff-6533-4b31-8f21-da7453418a74 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.324755179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.333809657Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e UID:e735f534-a1e8-4e99-b151-9a25498823c7 NetNS:/var/run/netns/9091874e-f8b1-4825-a2b0-de7771b82ed6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079918}] Aliases:map[]}"
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.333848131Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.342286642Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e UID:e735f534-a1e8-4e99-b151-9a25498823c7 NetNS:/var/run/netns/9091874e-f8b1-4825-a2b0-de7771b82ed6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079918}] Aliases:map[]}"
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.342465024Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.346605202Z" level=info msg="Ran pod sandbox 20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e with infra container: default/busybox/POD" id=bfc9b9ff-6533-4b31-8f21-da7453418a74 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.347731591Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9102391-3591-4ac7-9cb1-a54011721d29 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.347933677Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b9102391-3591-4ac7-9cb1-a54011721d29 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.347980611Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b9102391-3591-4ac7-9cb1-a54011721d29 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.348576221Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f9aab3a-b662-418d-aeed-aedb9fff36a1 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:32:53 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:53.350786446Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.718895895Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1f9aab3a-b662-418d-aeed-aedb9fff36a1 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.721222053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11f64b24-56cd-4067-8e4e-b670b2b91899 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.723976343Z" level=info msg="Creating container: default/busybox/busybox" id=4beb03d8-37ec-4c94-81ce-2afbc69d6550 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.724103212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.73058933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.731051883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.746875847Z" level=info msg="Created container 5041f8acdb01494d12c944a3c7e50d64251ed5b439b5f094e9e22e96f8fa63e0: default/busybox/busybox" id=4beb03d8-37ec-4c94-81ce-2afbc69d6550 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.747544335Z" level=info msg="Starting container: 5041f8acdb01494d12c944a3c7e50d64251ed5b439b5f094e9e22e96f8fa63e0" id=a8bd655e-2046-4596-a48a-6d71a141722a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:32:55 old-k8s-version-180313 crio[836]: time="2025-11-01T10:32:55.749603338Z" level=info msg="Started container" PID=1997 containerID=5041f8acdb01494d12c944a3c7e50d64251ed5b439b5f094e9e22e96f8fa63e0 description=default/busybox/busybox id=a8bd655e-2046-4596-a48a-6d71a141722a name=/runtime.v1.RuntimeService/StartContainer sandboxID=20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e
	Nov 01 10:33:02 old-k8s-version-180313 crio[836]: time="2025-11-01T10:33:02.26858789Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	5041f8acdb014       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   20b130499a64f       busybox                                          default
	235736f173158       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   aab2399c4eac8       coredns-5dd5756b68-ltprk                         kube-system
	0c8c41112d0c9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   9c3f96b34ee1b       storage-provisioner                              kube-system
	b60cb7450d5a0       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   a15f138d71454       kindnet-2qdl9                                    kube-system
	20ddda6d90825       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   a6adefcdc6e5c       kube-proxy-ltbrb                                 kube-system
	9285b6d67fda8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   c103abc5ea21a       kube-controller-manager-old-k8s-version-180313   kube-system
	09812f99e1ced       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   049eba2382933       kube-scheduler-old-k8s-version-180313            kube-system
	4e59baba98416       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   af25c34aab7fc       kube-apiserver-old-k8s-version-180313            kube-system
	bbb24e6ea3626       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   1f80afcf1ac79       etcd-old-k8s-version-180313                      kube-system
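	The listing above can be regenerated on the node itself. A minimal sketch, assuming the profile name from the log and that crictl inside the node is pointed at the CRI-O socket (as it is in the minikube image):
	
	    # open a one-off command on the minikube node and list all containers known to CRI-O
	    minikube -p old-k8s-version-180313 ssh -- sudo crictl ps -a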
	
	
	==> coredns [235736f1731586a1a6d6c8b7c6744e5b99e3192b610f2e4c1ae7c83831077186] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37597 - 10095 "HINFO IN 5370826138762014914.1093202528268390952. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021154654s
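	The host.minikube.internal record injected at 10:32:36 (see the start log above) comes from the sed expression run against the coredns ConfigMap: it inserts a log directive immediately before the existing errors line and the following hosts block immediately before the forward . /etc/resolv.conf line. A sketch of the inserted fragment, reconstructed from that sed expression rather than dumped from the live ConfigMap:
	
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }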
	
	
	==> describe nodes <==
	Name:               old-k8s-version-180313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-180313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=old-k8s-version-180313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_32_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:32:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-180313
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:33:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:32:53 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:32:53 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:32:53 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:32:53 +0000   Sat, 01 Nov 2025 10:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-180313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                315a9da4-3be7-492f-b967-608664aed87a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-ltprk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-180313                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-2qdl9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-180313             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-180313    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-ltbrb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-180313             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-180313 event: Registered Node old-k8s-version-180313 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-180313 status is now: NodeReady
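	The percentages in the Allocated resources table above follow directly from the capacity figures listed for the node (cpu: 2, i.e. 2000m, and memory: 8022296Ki). A quick check using nothing beyond shell arithmetic:
	
	    echo $(( 850 * 100 / 2000 ))%             # cpu requests: 850m of 2000m -> 42%
	    echo $(( 220 * 1024 * 100 / 8022296 ))%   # memory requests: 220Mi of 8022296Ki -> 2% (truncated)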
	
	
	==> dmesg <==
	[  +4.848874] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bbb24e6ea362618b75a071ab5f90b9ef524951a0bf264e0dbdd1c7b25c8f2699] <==
	{"level":"info","ts":"2025-11-01T10:32:14.982538Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:32:14.983641Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:32:14.983475Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:32:14.983808Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:32:14.983524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-01T10:32:14.984042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:32:14.983762Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:32:15.560528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-01T10:32:15.56065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-01T10:32:15.560691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-01T10:32:15.560745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:32:15.560778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-01T10:32:15.56083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-01T10:32:15.560863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-01T10:32:15.562409Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-180313 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:32:15.56262Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:32:15.562744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:32:15.562871Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:32:15.562924Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:32:15.562965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:32:15.564178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:32:15.573829Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:32:15.574006Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:32:15.574071Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:32:15.609907Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:33:04 up  2:15,  0 user,  load average: 3.19, 3.84, 2.86
	Linux old-k8s-version-180313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b60cb7450d5a07ded32d6274f23a109e3a5eb8dc0260316ee5113c67aaae35c7] <==
	I1101 10:32:38.627160       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:32:38.627504       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:32:38.627670       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:32:38.627711       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:32:38.627758       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:32:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:32:38.918603       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:32:38.920896       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:32:38.920982       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:32:38.921139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:32:39.121865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:32:39.121984       1 metrics.go:72] Registering metrics
	I1101 10:32:39.122081       1 controller.go:711] "Syncing nftables rules"
	I1101 10:32:48.918172       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:32:48.918214       1 main.go:301] handling current node
	I1101 10:32:58.918885       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:32:58.918925       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4e59baba98416708154e6db341d4ef86b28d52636dbbd40a0aff0e461ddda70e] <==
	I1101 10:32:19.128372       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:32:19.131132       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:32:19.134604       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:32:19.134942       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:32:19.134991       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:32:19.135111       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:32:19.147002       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:32:19.179892       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:32:19.201107       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:32:19.206523       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:32:19.871952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:32:19.876938       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:32:19.877031       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:32:20.521059       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:32:20.576611       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:32:20.641041       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:32:20.654691       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:32:20.655741       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:32:20.660668       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:32:21.151375       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:32:22.226587       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:32:22.240774       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:32:22.256223       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 10:32:34.087263       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:32:34.828852       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9285b6d67fda82be24d4d6adec5210c3d2d5834cd6b2deae75d8415dbd0aeace] <==
	I1101 10:32:34.121075       1 shared_informer.go:318] Caches are synced for PV protection
	I1101 10:32:34.123665       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2qdl9"
	I1101 10:32:34.146645       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:32:34.228146       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:32:34.596194       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:32:34.624308       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:32:34.624338       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:32:34.834713       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 10:32:35.037274       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hcx7j"
	I1101 10:32:35.050988       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ltprk"
	I1101 10:32:35.079064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="243.926857ms"
	I1101 10:32:35.120771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.645587ms"
	I1101 10:32:35.120878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.343µs"
	I1101 10:32:35.122346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.648µs"
	I1101 10:32:36.973344       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 10:32:37.032355       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hcx7j"
	I1101 10:32:37.044570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.153188ms"
	I1101 10:32:37.068067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.443798ms"
	I1101 10:32:37.068145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.328µs"
	I1101 10:32:49.404670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.171µs"
	I1101 10:32:49.428699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.811µs"
	I1101 10:32:50.577750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.036µs"
	I1101 10:32:50.623959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.674419ms"
	I1101 10:32:50.624214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.589µs"
	I1101 10:32:53.987811       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [20ddda6d90825761bd3c8d169110f63419493a6e34c170bf3a50aed99e176ea8] <==
	I1101 10:32:35.388510       1 server_others.go:69] "Using iptables proxy"
	I1101 10:32:35.405856       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1101 10:32:35.430852       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:32:35.432826       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:32:35.432918       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:32:35.432949       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:32:35.432995       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:32:35.433248       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:32:35.438392       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:32:35.442205       1 config.go:188] "Starting service config controller"
	I1101 10:32:35.442319       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:32:35.442363       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:32:35.442404       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:32:35.444113       1 config.go:315] "Starting node config controller"
	I1101 10:32:35.444218       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:32:35.545052       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:32:35.547868       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:32:35.547887       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [09812f99e1ced9156f49b90b79040242e02aa171047e3e21f1833a305cfb1093] <==
	W1101 10:32:19.166474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 10:32:19.166519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1101 10:32:19.166613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 10:32:19.166653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 10:32:19.166747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 10:32:19.166875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 10:32:19.166987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 10:32:19.167031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 10:32:19.174379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 10:32:19.174421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 10:32:19.174489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 10:32:19.174509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 10:32:19.174658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 10:32:19.174677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 10:32:19.174724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 10:32:19.174739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 10:32:19.174773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 10:32:19.174788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 10:32:19.174883       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 10:32:19.174899       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:32:20.280384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 10:32:20.280533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 10:32:20.298104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 10:32:20.298162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1101 10:32:20.856780       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: I1101 10:32:34.250956    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb4eda13-162f-45ae-bbf9-7e8838c8cec6-xtables-lock\") pod \"kindnet-2qdl9\" (UID: \"fb4eda13-162f-45ae-bbf9-7e8838c8cec6\") " pod="kube-system/kindnet-2qdl9"
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: I1101 10:32:34.250981    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb4eda13-162f-45ae-bbf9-7e8838c8cec6-lib-modules\") pod \"kindnet-2qdl9\" (UID: \"fb4eda13-162f-45ae-bbf9-7e8838c8cec6\") " pod="kube-system/kindnet-2qdl9"
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: I1101 10:32:34.251007    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww29v\" (UniqueName: \"kubernetes.io/projected/fb4eda13-162f-45ae-bbf9-7e8838c8cec6-kube-api-access-ww29v\") pod \"kindnet-2qdl9\" (UID: \"fb4eda13-162f-45ae-bbf9-7e8838c8cec6\") " pod="kube-system/kindnet-2qdl9"
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: E1101 10:32:34.262513    1379 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: E1101 10:32:34.262562    1379 projected.go:198] Error preparing data for projected volume kube-api-access-swb8p for pod kube-system/kube-proxy-ltbrb: configmap "kube-root-ca.crt" not found
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: E1101 10:32:34.262672    1379 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1582ecb-2805-4eb7-9adb-8f3834e8ef13-kube-api-access-swb8p podName:d1582ecb-2805-4eb7-9adb-8f3834e8ef13 nodeName:}" failed. No retries permitted until 2025-11-01 10:32:34.7626343 +0000 UTC m=+12.573241246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-swb8p" (UniqueName: "kubernetes.io/projected/d1582ecb-2805-4eb7-9adb-8f3834e8ef13-kube-api-access-swb8p") pod "kube-proxy-ltbrb" (UID: "d1582ecb-2805-4eb7-9adb-8f3834e8ef13") : configmap "kube-root-ca.crt" not found
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: E1101 10:32:34.361755    1379 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: E1101 10:32:34.361944    1379 projected.go:198] Error preparing data for projected volume kube-api-access-ww29v for pod kube-system/kindnet-2qdl9: configmap "kube-root-ca.crt" not found
	Nov 01 10:32:34 old-k8s-version-180313 kubelet[1379]: E1101 10:32:34.362031    1379 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb4eda13-162f-45ae-bbf9-7e8838c8cec6-kube-api-access-ww29v podName:fb4eda13-162f-45ae-bbf9-7e8838c8cec6 nodeName:}" failed. No retries permitted until 2025-11-01 10:32:34.862002216 +0000 UTC m=+12.672609170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ww29v" (UniqueName: "kubernetes.io/projected/fb4eda13-162f-45ae-bbf9-7e8838c8cec6-kube-api-access-ww29v") pod "kindnet-2qdl9" (UID: "fb4eda13-162f-45ae-bbf9-7e8838c8cec6") : configmap "kube-root-ca.crt" not found
	Nov 01 10:32:35 old-k8s-version-180313 kubelet[1379]: W1101 10:32:35.041483    1379 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/crio-a6adefcdc6e5c558a6eb10f838f3c4040ba87eefba6f5dc63589b459eccf2178 WatchSource:0}: Error finding container a6adefcdc6e5c558a6eb10f838f3c4040ba87eefba6f5dc63589b459eccf2178: Status 404 returned error can't find the container with id a6adefcdc6e5c558a6eb10f838f3c4040ba87eefba6f5dc63589b459eccf2178
	Nov 01 10:32:39 old-k8s-version-180313 kubelet[1379]: I1101 10:32:39.552452    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ltbrb" podStartSLOduration=5.552404566 podCreationTimestamp="2025-11-01 10:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:32:35.560045056 +0000 UTC m=+13.370652002" watchObservedRunningTime="2025-11-01 10:32:39.552404566 +0000 UTC m=+17.363011520"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.368654    1379 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.397684    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2qdl9" podStartSLOduration=11.995363251 podCreationTimestamp="2025-11-01 10:32:34 +0000 UTC" firstStartedPulling="2025-11-01 10:32:35.110559869 +0000 UTC m=+12.921166814" lastFinishedPulling="2025-11-01 10:32:38.512830991 +0000 UTC m=+16.323437945" observedRunningTime="2025-11-01 10:32:39.553506241 +0000 UTC m=+17.364113186" watchObservedRunningTime="2025-11-01 10:32:49.397634382 +0000 UTC m=+27.208241336"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.397919    1379 topology_manager.go:215] "Topology Admit Handler" podUID="3139bac6-9550-4f9f-8077-b4f35da28974" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.403161    1379 topology_manager.go:215] "Topology Admit Handler" podUID="6f8135b4-6f5e-47af-bfd2-3f22846c0482" podNamespace="kube-system" podName="coredns-5dd5756b68-ltprk"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.469439    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f8135b4-6f5e-47af-bfd2-3f22846c0482-config-volume\") pod \"coredns-5dd5756b68-ltprk\" (UID: \"6f8135b4-6f5e-47af-bfd2-3f22846c0482\") " pod="kube-system/coredns-5dd5756b68-ltprk"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.469496    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3139bac6-9550-4f9f-8077-b4f35da28974-tmp\") pod \"storage-provisioner\" (UID: \"3139bac6-9550-4f9f-8077-b4f35da28974\") " pod="kube-system/storage-provisioner"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.469540    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hvh\" (UniqueName: \"kubernetes.io/projected/3139bac6-9550-4f9f-8077-b4f35da28974-kube-api-access-99hvh\") pod \"storage-provisioner\" (UID: \"3139bac6-9550-4f9f-8077-b4f35da28974\") " pod="kube-system/storage-provisioner"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: I1101 10:32:49.469565    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99l9q\" (UniqueName: \"kubernetes.io/projected/6f8135b4-6f5e-47af-bfd2-3f22846c0482-kube-api-access-99l9q\") pod \"coredns-5dd5756b68-ltprk\" (UID: \"6f8135b4-6f5e-47af-bfd2-3f22846c0482\") " pod="kube-system/coredns-5dd5756b68-ltprk"
	Nov 01 10:32:49 old-k8s-version-180313 kubelet[1379]: W1101 10:32:49.736300    1379 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/crio-aab2399c4eac88f41b1f6132623fcea82f765cb711a9c7f338da10a9e6145bff WatchSource:0}: Error finding container aab2399c4eac88f41b1f6132623fcea82f765cb711a9c7f338da10a9e6145bff: Status 404 returned error can't find the container with id aab2399c4eac88f41b1f6132623fcea82f765cb711a9c7f338da10a9e6145bff
	Nov 01 10:32:50 old-k8s-version-180313 kubelet[1379]: I1101 10:32:50.595366    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ltprk" podStartSLOduration=15.595322529 podCreationTimestamp="2025-11-01 10:32:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:32:50.575706487 +0000 UTC m=+28.386313433" watchObservedRunningTime="2025-11-01 10:32:50.595322529 +0000 UTC m=+28.405929475"
	Nov 01 10:32:50 old-k8s-version-180313 kubelet[1379]: I1101 10:32:50.611556    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.611513611 podCreationTimestamp="2025-11-01 10:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:32:50.596491512 +0000 UTC m=+28.407098474" watchObservedRunningTime="2025-11-01 10:32:50.611513611 +0000 UTC m=+28.422120556"
	Nov 01 10:32:53 old-k8s-version-180313 kubelet[1379]: I1101 10:32:53.022097    1379 topology_manager.go:215] "Topology Admit Handler" podUID="e735f534-a1e8-4e99-b151-9a25498823c7" podNamespace="default" podName="busybox"
	Nov 01 10:32:53 old-k8s-version-180313 kubelet[1379]: I1101 10:32:53.090616    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwn4l\" (UniqueName: \"kubernetes.io/projected/e735f534-a1e8-4e99-b151-9a25498823c7-kube-api-access-vwn4l\") pod \"busybox\" (UID: \"e735f534-a1e8-4e99-b151-9a25498823c7\") " pod="default/busybox"
	Nov 01 10:32:53 old-k8s-version-180313 kubelet[1379]: W1101 10:32:53.346402    1379 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/crio-20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e WatchSource:0}: Error finding container 20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e: Status 404 returned error can't find the container with id 20b130499a64fc294ce862a21bda5078c489bab6f96ab0aa696a2085ff81a83e
	
	
	==> storage-provisioner [0c8c41112d0c939d0e78e7c5e6091c0817563eeb2109bf72e164210f89700144] <==
	I1101 10:32:49.774474       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:32:49.790176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:32:49.790221       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:32:49.800390       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:32:49.800558       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180313_5a4f1f29-5b08-4e41-a477-3bce0d304d25!
	I1101 10:32:49.802495       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f6ac2f7-82d3-49b9-9a4c-13a56b4eb794", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-180313_5a4f1f29-5b08-4e41-a477-3bce0d304d25 became leader
	I1101 10:32:49.901144       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180313_5a4f1f29-5b08-4e41-a477-3bce0d304d25!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180313 -n old-k8s-version-180313
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-180313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-180313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-180313 --alsologtostderr -v=1: exit status 80 (2.576553667s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-180313 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:34:19.059341  462393 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:34:19.059556  462393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:19.059588  462393 out.go:374] Setting ErrFile to fd 2...
	I1101 10:34:19.059609  462393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:19.059873  462393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:34:19.060159  462393 out.go:368] Setting JSON to false
	I1101 10:34:19.060221  462393 mustload.go:66] Loading cluster: old-k8s-version-180313
	I1101 10:34:19.060635  462393 config.go:182] Loaded profile config "old-k8s-version-180313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:34:19.061162  462393 cli_runner.go:164] Run: docker container inspect old-k8s-version-180313 --format={{.State.Status}}
	I1101 10:34:19.084096  462393 host.go:66] Checking if "old-k8s-version-180313" exists ...
	I1101 10:34:19.084430  462393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:19.180342  462393 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 10:34:19.170135627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:19.181239  462393 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-180313 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:34:19.184677  462393 out.go:179] * Pausing node old-k8s-version-180313 ... 
	I1101 10:34:19.188490  462393 host.go:66] Checking if "old-k8s-version-180313" exists ...
	I1101 10:34:19.188912  462393 ssh_runner.go:195] Run: systemctl --version
	I1101 10:34:19.188968  462393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180313
	I1101 10:34:19.211346  462393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/old-k8s-version-180313/id_rsa Username:docker}
	I1101 10:34:19.317199  462393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:34:19.352320  462393 pause.go:52] kubelet running: true
	I1101 10:34:19.352391  462393 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:34:19.699629  462393 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:34:19.699744  462393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:34:19.824038  462393 cri.go:89] found id: "b7f346f64193604fa373e321cc06889057058a09b32bd43aaeff438939dc1eca"
	I1101 10:34:19.824121  462393 cri.go:89] found id: "ddeeb83c620307d11f123bac2aa9499fb43e2a9c7406a2c998952da43aad6bfa"
	I1101 10:34:19.824140  462393 cri.go:89] found id: "0ce288ea2210149663378329c5a02b5fd6174c052665e287644f3a46a6df08f7"
	I1101 10:34:19.824162  462393 cri.go:89] found id: "e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c"
	I1101 10:34:19.824197  462393 cri.go:89] found id: "637fc39de58e802e58e0a2de44f07dad9e0d27382ddb64ee5b22a8c3b6a4584a"
	I1101 10:34:19.824222  462393 cri.go:89] found id: "bbb2ffd94dc5362517e75879e833273c6d849a640ba961071b27a88cf786f508"
	I1101 10:34:19.824241  462393 cri.go:89] found id: "ee76c6ed75d1e26dfa7a963bf48a1d032962e8b362b818a13d5814aefecdc9df"
	I1101 10:34:19.824261  462393 cri.go:89] found id: "527c1aae77a9ef7d7753fee214b43dddb0b3ba83158c2de982968514735a6e82"
	I1101 10:34:19.824296  462393 cri.go:89] found id: "4e0b8f9a18f71411eace0341504ba546aebc0d91bdd8bc805e54ead023a3c60c"
	I1101 10:34:19.824316  462393 cri.go:89] found id: "c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	I1101 10:34:19.824336  462393 cri.go:89] found id: "e7f9c82d186de380ac4c95709a6f4e841288f59b2f20cc353cb533bbe34ae795"
	I1101 10:34:19.824376  462393 cri.go:89] found id: ""
	I1101 10:34:19.824478  462393 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:34:19.847752  462393 retry.go:31] will retry after 293.537359ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:34:19Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:34:20.142245  462393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:34:20.162248  462393 pause.go:52] kubelet running: false
	I1101 10:34:20.162343  462393 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:34:20.407510  462393 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:34:20.407613  462393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:34:20.511290  462393 cri.go:89] found id: "b7f346f64193604fa373e321cc06889057058a09b32bd43aaeff438939dc1eca"
	I1101 10:34:20.511324  462393 cri.go:89] found id: "ddeeb83c620307d11f123bac2aa9499fb43e2a9c7406a2c998952da43aad6bfa"
	I1101 10:34:20.511330  462393 cri.go:89] found id: "0ce288ea2210149663378329c5a02b5fd6174c052665e287644f3a46a6df08f7"
	I1101 10:34:20.511334  462393 cri.go:89] found id: "e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c"
	I1101 10:34:20.511338  462393 cri.go:89] found id: "637fc39de58e802e58e0a2de44f07dad9e0d27382ddb64ee5b22a8c3b6a4584a"
	I1101 10:34:20.511342  462393 cri.go:89] found id: "bbb2ffd94dc5362517e75879e833273c6d849a640ba961071b27a88cf786f508"
	I1101 10:34:20.511346  462393 cri.go:89] found id: "ee76c6ed75d1e26dfa7a963bf48a1d032962e8b362b818a13d5814aefecdc9df"
	I1101 10:34:20.511349  462393 cri.go:89] found id: "527c1aae77a9ef7d7753fee214b43dddb0b3ba83158c2de982968514735a6e82"
	I1101 10:34:20.511352  462393 cri.go:89] found id: "4e0b8f9a18f71411eace0341504ba546aebc0d91bdd8bc805e54ead023a3c60c"
	I1101 10:34:20.511368  462393 cri.go:89] found id: "c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	I1101 10:34:20.511372  462393 cri.go:89] found id: "e7f9c82d186de380ac4c95709a6f4e841288f59b2f20cc353cb533bbe34ae795"
	I1101 10:34:20.511382  462393 cri.go:89] found id: ""
	I1101 10:34:20.511439  462393 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:34:20.526631  462393 retry.go:31] will retry after 396.317585ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:34:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:34:20.923165  462393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:34:20.939852  462393 pause.go:52] kubelet running: false
	I1101 10:34:20.939912  462393 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:34:21.319858  462393 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:34:21.319932  462393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:34:21.489818  462393 cri.go:89] found id: "b7f346f64193604fa373e321cc06889057058a09b32bd43aaeff438939dc1eca"
	I1101 10:34:21.489840  462393 cri.go:89] found id: "ddeeb83c620307d11f123bac2aa9499fb43e2a9c7406a2c998952da43aad6bfa"
	I1101 10:34:21.489845  462393 cri.go:89] found id: "0ce288ea2210149663378329c5a02b5fd6174c052665e287644f3a46a6df08f7"
	I1101 10:34:21.489848  462393 cri.go:89] found id: "e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c"
	I1101 10:34:21.489851  462393 cri.go:89] found id: "637fc39de58e802e58e0a2de44f07dad9e0d27382ddb64ee5b22a8c3b6a4584a"
	I1101 10:34:21.489855  462393 cri.go:89] found id: "bbb2ffd94dc5362517e75879e833273c6d849a640ba961071b27a88cf786f508"
	I1101 10:34:21.489858  462393 cri.go:89] found id: "ee76c6ed75d1e26dfa7a963bf48a1d032962e8b362b818a13d5814aefecdc9df"
	I1101 10:34:21.489861  462393 cri.go:89] found id: "527c1aae77a9ef7d7753fee214b43dddb0b3ba83158c2de982968514735a6e82"
	I1101 10:34:21.489864  462393 cri.go:89] found id: "4e0b8f9a18f71411eace0341504ba546aebc0d91bdd8bc805e54ead023a3c60c"
	I1101 10:34:21.489870  462393 cri.go:89] found id: "c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	I1101 10:34:21.489874  462393 cri.go:89] found id: "e7f9c82d186de380ac4c95709a6f4e841288f59b2f20cc353cb533bbe34ae795"
	I1101 10:34:21.489877  462393 cri.go:89] found id: ""
	I1101 10:34:21.489927  462393 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:34:21.527125  462393 out.go:203] 
	W1101 10:34:21.530412  462393 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:34:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:34:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:34:21.530441  462393 out.go:285] * 
	* 
	W1101 10:34:21.542416  462393 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:34:21.545888  462393 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-180313 --alsologtostderr -v=1 failed: exit status 80
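The stderr above shows the sequence the pause path runs before giving up: check whether the kubelet is still active, list CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then call `sudo runc list -f json`, which fails on every retry with "open /run/runc: no such file or directory" and surfaces as exit status 80 (GUEST_PAUSE). Below is a minimal Go sketch that replays those commands on the node (e.g. after `minikube ssh -p old-k8s-version-180313`) to reproduce the failing step by hand; the command strings are copied from the log, while the program layout, the helper name, and the single-namespace crictl call are illustrative assumptions, not minikube's implementation.
	// Replays the pause preflight commands from the stderr log above.
	// Assumes it is run on the node itself; helper name and layout are illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %v\nerr: %v\n%s\n", args, err, out)
	}

	func main() {
		// pause.go:52 in the log — is the kubelet still running?
		run("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		// cri.go:54 — list CRI containers (only the kube-system label shown here).
		run("sudo", "crictl", "ps", "-a", "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system")
		// The step that fails above: runc cannot open its state directory /run/runc.
		run("sudo", "runc", "list", "-f", "json")
	}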
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-180313
helpers_test.go:243: (dbg) docker inspect old-k8s-version-180313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1",
	        "Created": "2025-11-01T10:31:56.175953746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 459930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:33:17.401971457Z",
	            "FinishedAt": "2025-11-01T10:33:16.550600918Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/hosts",
	        "LogPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1-json.log",
	        "Name": "/old-k8s-version-180313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-180313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-180313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1",
	                "LowerDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-180313",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-180313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-180313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-180313",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-180313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "914f7f251ae08ccd4e7afbde9a9cf923f7630c69927615c4d252d39f8cdb055a",
	            "SandboxKey": "/var/run/docker/netns/914f7f251ae0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-180313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:5b:61:b0:1e:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "166ca61202b04ec7e10cf51d0a2cefb4328ec9285bf6b5c3a38e12ab732f4c8c",
	                    "EndpointID": "c88ff27a75647acdbd29b10b1746e9e9d7cb153ea59b8ed62565df65db62e83d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-180313",
	                        "d94f4283ef92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313: exit status 2 (689.148951ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-180313 logs -n 25: (2.292599145s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-220636 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo containerd config dump                                                                                                                                                                                                  │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo crio config                                                                                                                                                                                                             │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ delete  │ -p cilium-220636                                                                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:30 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p force-systemd-env-065424                                                                                                                                                                                                                   │ force-systemd-env-065424 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ cert-options-082900 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	│ stop    │ -p old-k8s-version-180313 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:34:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:34:11.136320  461914 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:34:11.136478  461914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:11.136482  461914 out.go:374] Setting ErrFile to fd 2...
	I1101 10:34:11.136486  461914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:11.136839  461914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:34:11.137298  461914 out.go:368] Setting JSON to false
	I1101 10:34:11.138473  461914 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8201,"bootTime":1761985051,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:34:11.138545  461914 start.go:143] virtualization:  
	I1101 10:34:11.142000  461914 out.go:179] * [cert-expiration-459318] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:34:11.145912  461914 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:34:11.145979  461914 notify.go:221] Checking for updates...
	I1101 10:34:11.148974  461914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:34:11.151943  461914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:34:11.154785  461914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:34:11.157983  461914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:34:11.161137  461914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:34:11.164575  461914 config.go:182] Loaded profile config "cert-expiration-459318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:11.165167  461914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:34:11.194287  461914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:34:11.194400  461914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:11.268930  461914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:34:11.253057956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:11.269068  461914 docker.go:319] overlay module found
	I1101 10:34:11.272230  461914 out.go:179] * Using the docker driver based on existing profile
	I1101 10:34:11.275154  461914 start.go:309] selected driver: docker
	I1101 10:34:11.275165  461914 start.go:930] validating driver "docker" against &{Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:11.275271  461914 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:34:11.276036  461914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:11.340390  461914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:34:11.330626211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:11.340784  461914 cni.go:84] Creating CNI manager for ""
	I1101 10:34:11.340836  461914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:34:11.340885  461914 start.go:353] cluster config:
	{Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:11.344159  461914 out.go:179] * Starting "cert-expiration-459318" primary control-plane node in "cert-expiration-459318" cluster
	I1101 10:34:11.347157  461914 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:34:11.350141  461914 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:34:11.353178  461914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:34:11.353132  461914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:11.353253  461914 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:34:11.353262  461914 cache.go:59] Caching tarball of preloaded images
	I1101 10:34:11.353353  461914 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:34:11.353359  461914 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:34:11.353479  461914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/config.json ...
	I1101 10:34:11.373096  461914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:34:11.373107  461914 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:34:11.373124  461914 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:34:11.373145  461914 start.go:360] acquireMachinesLock for cert-expiration-459318: {Name:mk96f545b8c3406a32675a71039ef54c1b79a501 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:34:11.373214  461914 start.go:364] duration metric: took 49.658µs to acquireMachinesLock for "cert-expiration-459318"
	I1101 10:34:11.373234  461914 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:34:11.373244  461914 fix.go:54] fixHost starting: 
	I1101 10:34:11.373517  461914 cli_runner.go:164] Run: docker container inspect cert-expiration-459318 --format={{.State.Status}}
	I1101 10:34:11.392766  461914 fix.go:112] recreateIfNeeded on cert-expiration-459318: state=Running err=<nil>
	W1101 10:34:11.392786  461914 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:34:11.395968  461914 out.go:252] * Updating the running docker "cert-expiration-459318" container ...
	I1101 10:34:11.395991  461914 machine.go:94] provisionDockerMachine start ...
	I1101 10:34:11.396085  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:11.418655  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:11.419006  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:11.419013  461914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:34:11.570629  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-459318
	
	I1101 10:34:11.570644  461914 ubuntu.go:182] provisioning hostname "cert-expiration-459318"
	I1101 10:34:11.570720  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:11.592089  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:11.592412  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:11.592421  461914 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-459318 && echo "cert-expiration-459318" | sudo tee /etc/hostname
	I1101 10:34:11.753055  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-459318
	
	I1101 10:34:11.753143  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:11.775079  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:11.775385  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:11.775400  461914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-459318' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-459318/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-459318' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:34:11.934157  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:34:11.934172  461914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:34:11.934193  461914 ubuntu.go:190] setting up certificates
	I1101 10:34:11.934210  461914 provision.go:84] configureAuth start
	I1101 10:34:11.934274  461914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-459318
	I1101 10:34:11.953015  461914 provision.go:143] copyHostCerts
	I1101 10:34:11.953085  461914 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:34:11.953099  461914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:34:11.953176  461914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:34:11.953296  461914 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:34:11.953300  461914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:34:11.953328  461914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:34:11.953390  461914 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:34:11.953393  461914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:34:11.953417  461914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:34:11.953469  461914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-459318 san=[127.0.0.1 192.168.85.2 cert-expiration-459318 localhost minikube]
	I1101 10:34:12.638859  461914 provision.go:177] copyRemoteCerts
	I1101 10:34:12.638912  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:34:12.638951  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:12.664410  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:12.775656  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:34:12.796838  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:34:12.817785  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:34:12.839101  461914 provision.go:87] duration metric: took 904.87974ms to configureAuth
	I1101 10:34:12.839119  461914 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:34:12.839312  461914 config.go:182] Loaded profile config "cert-expiration-459318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:12.839420  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:12.870016  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:12.870339  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:12.870352  461914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:34:18.240383  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:34:18.240396  461914 machine.go:97] duration metric: took 6.844398143s to provisionDockerMachine
	I1101 10:34:18.240405  461914 start.go:293] postStartSetup for "cert-expiration-459318" (driver="docker")
	I1101 10:34:18.240415  461914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:34:18.240471  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:34:18.240510  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.264766  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.377840  461914 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:34:18.381523  461914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:34:18.381542  461914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:34:18.381551  461914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:34:18.381605  461914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:34:18.381688  461914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:34:18.381826  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:34:18.389810  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:34:18.408463  461914 start.go:296] duration metric: took 168.042395ms for postStartSetup
	I1101 10:34:18.408534  461914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:34:18.408591  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.426329  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.535430  461914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:34:18.541123  461914 fix.go:56] duration metric: took 7.167877103s for fixHost
	I1101 10:34:18.541138  461914 start.go:83] releasing machines lock for "cert-expiration-459318", held for 7.167915808s
	I1101 10:34:18.541211  461914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-459318
	I1101 10:34:18.559110  461914 ssh_runner.go:195] Run: cat /version.json
	I1101 10:34:18.559153  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.559405  461914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:34:18.559461  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.588276  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.588276  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.853440  461914 ssh_runner.go:195] Run: systemctl --version
	I1101 10:34:18.863202  461914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:34:18.927232  461914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:34:18.938785  461914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:34:18.938847  461914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:34:18.950058  461914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:34:18.950072  461914 start.go:496] detecting cgroup driver to use...
	I1101 10:34:18.950104  461914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:34:18.950156  461914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:34:18.974965  461914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:34:19.000828  461914 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:34:19.000893  461914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:34:19.018627  461914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:34:19.033341  461914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:34:19.250126  461914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:34:19.456757  461914 docker.go:234] disabling docker service ...
	I1101 10:34:19.456816  461914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:34:19.489169  461914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:34:19.504867  461914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:34:19.696935  461914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:34:19.886289  461914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:34:19.900078  461914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:34:19.918083  461914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:34:19.918139  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.930046  461914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:34:19.930110  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.940860  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.951689  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.962197  461914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:34:19.972606  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.983227  461914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.992248  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:20.004082  461914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:34:20.017147  461914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:34:20.026269  461914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:20.196501  461914 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:34:20.454266  461914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:34:20.454329  461914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:34:20.462193  461914 start.go:564] Will wait 60s for crictl version
	I1101 10:34:20.462247  461914 ssh_runner.go:195] Run: which crictl
	I1101 10:34:20.467542  461914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:34:20.507802  461914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:34:20.507896  461914 ssh_runner.go:195] Run: crio --version
	I1101 10:34:20.545024  461914 ssh_runner.go:195] Run: crio --version
	I1101 10:34:20.579207  461914 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:34:20.582252  461914 cli_runner.go:164] Run: docker network inspect cert-expiration-459318 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:34:20.600011  461914 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:34:20.604341  461914 kubeadm.go:884] updating cluster {Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:34:20.604446  461914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:20.604501  461914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:20.640197  461914 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:20.640208  461914 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:34:20.640263  461914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:20.666147  461914 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:20.666160  461914 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:34:20.666167  461914 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:34:20.666295  461914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-459318 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:34:20.666378  461914 ssh_runner.go:195] Run: crio config
	I1101 10:34:20.744653  461914 cni.go:84] Creating CNI manager for ""
	I1101 10:34:20.744664  461914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:34:20.744684  461914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:34:20.744720  461914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-459318 NodeName:cert-expiration-459318 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:34:20.744855  461914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-459318"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:34:20.744930  461914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:34:20.756250  461914 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:34:20.756319  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:34:20.764709  461914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:34:20.778601  461914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:34:20.792489  461914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1101 10:34:20.806316  461914 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:34:20.810204  461914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:20.964210  461914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:34:20.979030  461914 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318 for IP: 192.168.85.2
	I1101 10:34:20.979060  461914 certs.go:195] generating shared ca certs ...
	I1101 10:34:20.979076  461914 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:20.979214  461914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:34:20.979256  461914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:34:20.979261  461914 certs.go:257] generating profile certs ...
	W1101 10:34:20.979373  461914 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1101 10:34:20.979594  461914 certs.go:624] cert expired /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt: expiration: 2025-11-01 10:33:43 +0000 UTC, now: 2025-11-01 10:34:20.979587449 +0000 UTC m=+9.889791633
	I1101 10:34:20.979709  461914 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.key
	I1101 10:34:20.979726  461914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt with IP's: []
	
	
	==> CRI-O <==
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.468198074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.474700282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.475552239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.49220443Z" level=info msg="Created container c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p/dashboard-metrics-scraper" id=83d86006-5be3-42fa-9d9e-40e41d65f64b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.494548721Z" level=info msg="Starting container: c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72" id=6eecc742-dbde-4150-99d2-c2f720cedc3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.496419414Z" level=info msg="Started container" PID=1630 containerID=c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p/dashboard-metrics-scraper id=6eecc742-dbde-4150-99d2-c2f720cedc3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=0105e472aab020673e89f44c64000129856f537e0c25cd458a68d4cb34fb2724
	Nov 01 10:34:05 old-k8s-version-180313 conmon[1628]: conmon c70e29e0a4b7c8c90f84 <ninfo>: container 1630 exited with status 1
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.795618888Z" level=info msg="Removing container: 1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974" id=38074d01-fc49-4068-a261-e9a0e87b1088 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.806165642Z" level=info msg="Error loading conmon cgroup of container 1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974: cgroup deleted" id=38074d01-fc49-4068-a261-e9a0e87b1088 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.809204342Z" level=info msg="Removed container 1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p/dashboard-metrics-scraper" id=38074d01-fc49-4068-a261-e9a0e87b1088 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.422807127Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.438034594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.438200619Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.438276632Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.45169986Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.451915543Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.452022219Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.462890314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.463055067Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.463134059Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.47228617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.472452654Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.472536421Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.48606027Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.48627345Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c70e29e0a4b7c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   0105e472aab02       dashboard-metrics-scraper-5f989dc9cf-d8s5p       kubernetes-dashboard
	b7f346f641936       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   d153843c1b6a2       storage-provisioner                              kube-system
	e7f9c82d186de       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   426ea044e54ae       kubernetes-dashboard-8694d4445c-wt2nm            kubernetes-dashboard
	ddeeb83c62030       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   add87bc476aed       coredns-5dd5756b68-ltprk                         kube-system
	7cc4abc01044d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   5ac3561d0e5f0       busybox                                          default
	0ce288ea22101       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   a52280133e87e       kube-proxy-ltbrb                                 kube-system
	e34ebc504a578       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   d153843c1b6a2       storage-provisioner                              kube-system
	637fc39de58e8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   3992fcfe5f9fa       kindnet-2qdl9                                    kube-system
	bbb2ffd94dc53       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   fd7bd299d2d69       kube-controller-manager-old-k8s-version-180313   kube-system
	ee76c6ed75d1e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   8eb6c8fc170e6       etcd-old-k8s-version-180313                      kube-system
	527c1aae77a9e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   4ede815b41d7b       kube-apiserver-old-k8s-version-180313            kube-system
	4e0b8f9a18f71       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   f50e7705efe83       kube-scheduler-old-k8s-version-180313            kube-system
	
	
	==> coredns [ddeeb83c620307d11f123bac2aa9499fb43e2a9c7406a2c998952da43aad6bfa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39958 - 47129 "HINFO IN 8504101705225528951.6274424222166963424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021155622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-180313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-180313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=old-k8s-version-180313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_32_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:32:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-180313
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-180313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                315a9da4-3be7-492f-b967-608664aed87a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-ltprk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-180313                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-2qdl9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-180313             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-180313    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-ltbrb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-180313             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-d8s5p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wt2nm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node old-k8s-version-180313 event: Registered Node old-k8s-version-180313 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-180313 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-180313 event: Registered Node old-k8s-version-180313 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee76c6ed75d1e26dfa7a963bf48a1d032962e8b362b818a13d5814aefecdc9df] <==
	{"level":"info","ts":"2025-11-01T10:33:25.218168Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:33:25.218202Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:33:25.218439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-01T10:33:25.218533Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:33:25.218674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:33:25.218747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:33:25.232527Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:33:25.233128Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:33:25.232863Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:33:25.23336Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:33:25.23342Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:33:26.704507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:33:26.704628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:33:26.704684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-01T10:33:26.704723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.704754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.704791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.704824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.708463Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-180313 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:33:26.708647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:33:26.711865Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:33:26.711994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:33:26.715516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-01T10:33:26.719952Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:33:26.737776Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:34:23 up  2:16,  0 user,  load average: 3.41, 3.61, 2.86
	Linux old-k8s-version-180313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [637fc39de58e802e58e0a2de44f07dad9e0d27382ddb64ee5b22a8c3b6a4584a] <==
	I1101 10:33:32.218546       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:33:32.218767       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:33:32.218898       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:33:32.218915       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:33:32.218928       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:33:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:33:32.419471       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:33:32.419488       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:33:32.419497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:33:32.419626       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:34:02.419888       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:34:02.419889       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:34:02.420126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:34:02.420229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 10:34:03.620077       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:34:03.620210       1 metrics.go:72] Registering metrics
	I1101 10:34:03.620340       1 controller.go:711] "Syncing nftables rules"
	I1101 10:34:12.421757       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:34:12.421814       1 main.go:301] handling current node
	I1101 10:34:22.426184       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:34:22.426223       1 main.go:301] handling current node
	
	
	==> kube-apiserver [527c1aae77a9ef7d7753fee214b43dddb0b3ba83158c2de982968514735a6e82] <==
	I1101 10:33:30.615322       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1101 10:33:30.644283       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:33:30.644360       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:33:30.644366       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:33:30.644372       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:33:30.690669       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:33:30.716234       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:33:30.717477       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:33:30.718362       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:33:30.730901       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:33:30.733193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:33:30.739199       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1101 10:33:30.741548       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:33:30.744959       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:33:31.581205       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:33:32.412114       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:33:32.455482       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:33:32.482727       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:33:32.495444       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:33:32.505390       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:33:32.560280       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.236.181"}
	I1101 10:33:32.576980       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.166.96"}
	I1101 10:33:43.479432       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:33:43.488634       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:33:43.639201       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bbb2ffd94dc5362517e75879e833273c6d849a640ba961071b27a88cf786f508] <==
	I1101 10:33:43.580261       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-d8s5p"
	I1101 10:33:43.581125       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-wt2nm"
	I1101 10:33:43.581804       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1101 10:33:43.583961       1 shared_informer.go:318] Caches are synced for endpoint
	I1101 10:33:43.597264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.244203ms"
	I1101 10:33:43.610915       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 10:33:43.614621       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:33:43.624735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="126.82163ms"
	I1101 10:33:43.645012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.129812ms"
	I1101 10:33:43.646040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.097µs"
	I1101 10:33:43.651226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.906314ms"
	I1101 10:33:43.651406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.933µs"
	I1101 10:33:43.665317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.629µs"
	I1101 10:33:43.994562       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:33:44.014933       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:33:44.014992       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:33:48.751417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.595µs"
	I1101 10:33:49.771097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.069µs"
	I1101 10:33:50.772273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.492µs"
	I1101 10:33:53.789462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.242073ms"
	I1101 10:33:53.790295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.531µs"
	I1101 10:34:04.655557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.300223ms"
	I1101 10:34:04.655661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.507µs"
	I1101 10:34:05.815521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.475µs"
	I1101 10:34:13.904715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.212µs"
	
	
	==> kube-proxy [0ce288ea2210149663378329c5a02b5fd6174c052665e287644f3a46a6df08f7] <==
	I1101 10:33:32.190348       1 server_others.go:69] "Using iptables proxy"
	I1101 10:33:32.256896       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1101 10:33:32.282639       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:33:32.284855       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:33:32.285064       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:33:32.285103       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:33:32.285163       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:33:32.285441       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:33:32.285954       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:32.286835       1 config.go:188] "Starting service config controller"
	I1101 10:33:32.286934       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:33:32.286987       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:33:32.287021       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:33:32.287652       1 config.go:315] "Starting node config controller"
	I1101 10:33:32.287738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:33:32.387593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:33:32.387599       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:33:32.387847       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4e0b8f9a18f71411eace0341504ba546aebc0d91bdd8bc805e54ead023a3c60c] <==
	I1101 10:33:27.737415       1 serving.go:348] Generated self-signed cert in-memory
	W1101 10:33:30.556326       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:33:30.556424       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:33:30.556458       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:33:30.556518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:33:30.666230       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:33:30.666268       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:30.670105       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:33:30.670200       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:33:30.670772       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:30.670835       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W1101 10:33:30.702465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.702584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 10:33:30.702736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1101 10:33:30.702805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1101 10:33:30.702950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1101 10:33:30.702988       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1101 10:33:30.703293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.703366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 10:33:30.703882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.703958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 10:33:30.704066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.704647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	I1101 10:33:30.777819       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.616530     773 topology_manager.go:215] "Topology Admit Handler" podUID="954439ef-73b3-44b2-bf87-2f7761a1c85b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-wt2nm"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684001     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa887d66-d751-4945-bdd3-79f83ba6a844-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8s5p\" (UID: \"fa887d66-d751-4945-bdd3-79f83ba6a844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684100     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/954439ef-73b3-44b2-bf87-2f7761a1c85b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-wt2nm\" (UID: \"954439ef-73b3-44b2-bf87-2f7761a1c85b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wt2nm"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684140     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dr6w\" (UniqueName: \"kubernetes.io/projected/fa887d66-d751-4945-bdd3-79f83ba6a844-kube-api-access-6dr6w\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8s5p\" (UID: \"fa887d66-d751-4945-bdd3-79f83ba6a844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684189     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4m8\" (UniqueName: \"kubernetes.io/projected/954439ef-73b3-44b2-bf87-2f7761a1c85b-kube-api-access-5m4m8\") pod \"kubernetes-dashboard-8694d4445c-wt2nm\" (UID: \"954439ef-73b3-44b2-bf87-2f7761a1c85b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wt2nm"
	Nov 01 10:33:48 old-k8s-version-180313 kubelet[773]: I1101 10:33:48.735513     773 scope.go:117] "RemoveContainer" containerID="e5345d093f08b8a84ddc0f861202ae64b8befe001fa68e5f79d690524c5b4794"
	Nov 01 10:33:49 old-k8s-version-180313 kubelet[773]: I1101 10:33:49.743436     773 scope.go:117] "RemoveContainer" containerID="e5345d093f08b8a84ddc0f861202ae64b8befe001fa68e5f79d690524c5b4794"
	Nov 01 10:33:49 old-k8s-version-180313 kubelet[773]: I1101 10:33:49.743741     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:33:49 old-k8s-version-180313 kubelet[773]: E1101 10:33:49.744014     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:33:50 old-k8s-version-180313 kubelet[773]: I1101 10:33:50.750897     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:33:50 old-k8s-version-180313 kubelet[773]: E1101 10:33:50.751165     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:33:53 old-k8s-version-180313 kubelet[773]: I1101 10:33:53.889339     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:33:53 old-k8s-version-180313 kubelet[773]: E1101 10:33:53.889738     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:34:02 old-k8s-version-180313 kubelet[773]: I1101 10:34:02.782401     773 scope.go:117] "RemoveContainer" containerID="e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c"
	Nov 01 10:34:02 old-k8s-version-180313 kubelet[773]: I1101 10:34:02.809127     773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wt2nm" podStartSLOduration=10.64563652 podCreationTimestamp="2025-11-01 10:33:43 +0000 UTC" firstStartedPulling="2025-11-01 10:33:43.942448538 +0000 UTC m=+19.618720369" lastFinishedPulling="2025-11-01 10:33:53.105378355 +0000 UTC m=+28.781650185" observedRunningTime="2025-11-01 10:33:53.775278631 +0000 UTC m=+29.451550470" watchObservedRunningTime="2025-11-01 10:34:02.808566336 +0000 UTC m=+38.484838175"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: I1101 10:34:05.465467     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: I1101 10:34:05.793366     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: I1101 10:34:05.793756     773 scope.go:117] "RemoveContainer" containerID="c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: E1101 10:34:05.794075     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:34:13 old-k8s-version-180313 kubelet[773]: I1101 10:34:13.889165     773 scope.go:117] "RemoveContainer" containerID="c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	Nov 01 10:34:13 old-k8s-version-180313 kubelet[773]: E1101 10:34:13.890263     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:34:19 old-k8s-version-180313 kubelet[773]: I1101 10:34:19.639972     773 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:34:19 old-k8s-version-180313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:34:19 old-k8s-version-180313 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:34:19 old-k8s-version-180313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e7f9c82d186de380ac4c95709a6f4e841288f59b2f20cc353cb533bbe34ae795] <==
	2025/11/01 10:33:53 Using namespace: kubernetes-dashboard
	2025/11/01 10:33:53 Using in-cluster config to connect to apiserver
	2025/11/01 10:33:53 Using secret token for csrf signing
	2025/11/01 10:33:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:33:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:33:53 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:33:53 Generating JWE encryption key
	2025/11/01 10:33:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:33:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:33:53 Initializing JWE encryption key from synchronized object
	2025/11/01 10:33:53 Creating in-cluster Sidecar client
	2025/11/01 10:33:53 Serving insecurely on HTTP port: 9090
	2025/11/01 10:33:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:34:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:33:53 Starting overwatch
	
	
	==> storage-provisioner [b7f346f64193604fa373e321cc06889057058a09b32bd43aaeff438939dc1eca] <==
	I1101 10:34:02.845505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:34:02.868092       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:34:02.868193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:34:20.272365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:34:20.277590       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180313_edc991ae-54be-4d4d-a709-9810096df9b0!
	I1101 10:34:20.281637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f6ac2f7-82d3-49b9-9a4c-13a56b4eb794", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-180313_edc991ae-54be-4d4d-a709-9810096df9b0 became leader
	I1101 10:34:20.378187       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180313_edc991ae-54be-4d4d-a709-9810096df9b0!
	
	
	==> storage-provisioner [e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c] <==
	I1101 10:33:32.107825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:34:02.111466       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180313 -n old-k8s-version-180313
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180313 -n old-k8s-version-180313: exit status 2 (544.66842ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-180313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-180313
helpers_test.go:243: (dbg) docker inspect old-k8s-version-180313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1",
	        "Created": "2025-11-01T10:31:56.175953746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 459930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:33:17.401971457Z",
	            "FinishedAt": "2025-11-01T10:33:16.550600918Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/hosts",
	        "LogPath": "/var/lib/docker/containers/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1/d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1-json.log",
	        "Name": "/old-k8s-version-180313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-180313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-180313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d94f4283ef9254f51719e74494047deae983739ddbd48bf494882a4285c9adf1",
	                "LowerDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c491e4bf06ad22f4811e37f58c78acc65c00215daaa2ad231095c57712938d90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-180313",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-180313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-180313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-180313",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-180313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "914f7f251ae08ccd4e7afbde9a9cf923f7630c69927615c4d252d39f8cdb055a",
	            "SandboxKey": "/var/run/docker/netns/914f7f251ae0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-180313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:5b:61:b0:1e:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "166ca61202b04ec7e10cf51d0a2cefb4328ec9285bf6b5c3a38e12ab732f4c8c",
	                    "EndpointID": "c88ff27a75647acdbd29b10b1746e9e9d7cb153ea59b8ed62565df65db62e83d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-180313",
	                        "d94f4283ef92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313: exit status 2 (463.142157ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-180313 logs -n 25: (1.936046942s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-220636 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo containerd config dump                                                                                                                                                                                                  │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo crio config                                                                                                                                                                                                             │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ delete  │ -p cilium-220636                                                                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:30 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p force-systemd-env-065424                                                                                                                                                                                                                   │ force-systemd-env-065424 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ cert-options-082900 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	│ stop    │ -p old-k8s-version-180313 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
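The final row of the audit table is the pause invocation this test exercises; it has no end time recorded. A rough sketch of that pause-then-verify flow, with the profile and flags copied from the table; the run helper and the follow-up status check are illustrative, not minikube's actual test code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes the minikube binary under test with the given arguments.
    func run(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        profile := "old-k8s-version-180313"

        // The invocation recorded in the last audit row above.
        if out, err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
            fmt.Printf("pause failed: %v\n%s", err, out)
            return
        }

        // After a successful pause the host container should still report Running.
        out, _ := run("status", "--format={{.Host}}", "-p", profile)
        fmt.Printf("host status after pause: %s", out)
    }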
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:34:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:34:11.136320  461914 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:34:11.136478  461914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:11.136482  461914 out.go:374] Setting ErrFile to fd 2...
	I1101 10:34:11.136486  461914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:11.136839  461914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:34:11.137298  461914 out.go:368] Setting JSON to false
	I1101 10:34:11.138473  461914 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8201,"bootTime":1761985051,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:34:11.138545  461914 start.go:143] virtualization:  
	I1101 10:34:11.142000  461914 out.go:179] * [cert-expiration-459318] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:34:11.145912  461914 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:34:11.145979  461914 notify.go:221] Checking for updates...
	I1101 10:34:11.148974  461914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:34:11.151943  461914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:34:11.154785  461914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:34:11.157983  461914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:34:11.161137  461914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:34:11.164575  461914 config.go:182] Loaded profile config "cert-expiration-459318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:11.165167  461914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:34:11.194287  461914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:34:11.194400  461914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:11.268930  461914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:34:11.253057956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:11.269068  461914 docker.go:319] overlay module found
	I1101 10:34:11.272230  461914 out.go:179] * Using the docker driver based on existing profile
	I1101 10:34:11.275154  461914 start.go:309] selected driver: docker
	I1101 10:34:11.275165  461914 start.go:930] validating driver "docker" against &{Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:11.275271  461914 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:34:11.276036  461914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:11.340390  461914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:34:11.330626211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:11.340784  461914 cni.go:84] Creating CNI manager for ""
	I1101 10:34:11.340836  461914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:34:11.340885  461914 start.go:353] cluster config:
	{Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:11.344159  461914 out.go:179] * Starting "cert-expiration-459318" primary control-plane node in "cert-expiration-459318" cluster
	I1101 10:34:11.347157  461914 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:34:11.350141  461914 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:34:11.353178  461914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:34:11.353132  461914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:11.353253  461914 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:34:11.353262  461914 cache.go:59] Caching tarball of preloaded images
	I1101 10:34:11.353353  461914 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:34:11.353359  461914 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:34:11.353479  461914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/config.json ...
	I1101 10:34:11.373096  461914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:34:11.373107  461914 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:34:11.373124  461914 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:34:11.373145  461914 start.go:360] acquireMachinesLock for cert-expiration-459318: {Name:mk96f545b8c3406a32675a71039ef54c1b79a501 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:34:11.373214  461914 start.go:364] duration metric: took 49.658µs to acquireMachinesLock for "cert-expiration-459318"
	I1101 10:34:11.373234  461914 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:34:11.373244  461914 fix.go:54] fixHost starting: 
	I1101 10:34:11.373517  461914 cli_runner.go:164] Run: docker container inspect cert-expiration-459318 --format={{.State.Status}}
	I1101 10:34:11.392766  461914 fix.go:112] recreateIfNeeded on cert-expiration-459318: state=Running err=<nil>
	W1101 10:34:11.392786  461914 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:34:11.395968  461914 out.go:252] * Updating the running docker "cert-expiration-459318" container ...
	I1101 10:34:11.395991  461914 machine.go:94] provisionDockerMachine start ...
	I1101 10:34:11.396085  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:11.418655  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:11.419006  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:11.419013  461914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:34:11.570629  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-459318
	
	I1101 10:34:11.570644  461914 ubuntu.go:182] provisioning hostname "cert-expiration-459318"
	I1101 10:34:11.570720  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:11.592089  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:11.592412  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:11.592421  461914 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-459318 && echo "cert-expiration-459318" | sudo tee /etc/hostname
	I1101 10:34:11.753055  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-459318
	
	I1101 10:34:11.753143  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:11.775079  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:11.775385  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:11.775400  461914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-459318' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-459318/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-459318' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:34:11.934157  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:34:11.934172  461914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:34:11.934193  461914 ubuntu.go:190] setting up certificates
	I1101 10:34:11.934210  461914 provision.go:84] configureAuth start
	I1101 10:34:11.934274  461914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-459318
	I1101 10:34:11.953015  461914 provision.go:143] copyHostCerts
	I1101 10:34:11.953085  461914 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:34:11.953099  461914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:34:11.953176  461914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:34:11.953296  461914 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:34:11.953300  461914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:34:11.953328  461914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:34:11.953390  461914 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:34:11.953393  461914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:34:11.953417  461914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:34:11.953469  461914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-459318 san=[127.0.0.1 192.168.85.2 cert-expiration-459318 localhost minikube]
	I1101 10:34:12.638859  461914 provision.go:177] copyRemoteCerts
	I1101 10:34:12.638912  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:34:12.638951  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:12.664410  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:12.775656  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:34:12.796838  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:34:12.817785  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:34:12.839101  461914 provision.go:87] duration metric: took 904.87974ms to configureAuth
	I1101 10:34:12.839119  461914 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:34:12.839312  461914 config.go:182] Loaded profile config "cert-expiration-459318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:12.839420  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:12.870016  461914 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:12.870339  461914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1101 10:34:12.870352  461914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:34:18.240383  461914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:34:18.240396  461914 machine.go:97] duration metric: took 6.844398143s to provisionDockerMachine
	I1101 10:34:18.240405  461914 start.go:293] postStartSetup for "cert-expiration-459318" (driver="docker")
	I1101 10:34:18.240415  461914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:34:18.240471  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:34:18.240510  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.264766  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.377840  461914 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:34:18.381523  461914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:34:18.381542  461914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:34:18.381551  461914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:34:18.381605  461914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:34:18.381688  461914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:34:18.381826  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:34:18.389810  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:34:18.408463  461914 start.go:296] duration metric: took 168.042395ms for postStartSetup
	I1101 10:34:18.408534  461914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:34:18.408591  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.426329  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.535430  461914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:34:18.541123  461914 fix.go:56] duration metric: took 7.167877103s for fixHost
	I1101 10:34:18.541138  461914 start.go:83] releasing machines lock for "cert-expiration-459318", held for 7.167915808s
	I1101 10:34:18.541211  461914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-459318
	I1101 10:34:18.559110  461914 ssh_runner.go:195] Run: cat /version.json
	I1101 10:34:18.559153  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.559405  461914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:34:18.559461  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:18.588276  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.588276  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:18.853440  461914 ssh_runner.go:195] Run: systemctl --version
	I1101 10:34:18.863202  461914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:34:18.927232  461914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:34:18.938785  461914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:34:18.938847  461914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:34:18.950058  461914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:34:18.950072  461914 start.go:496] detecting cgroup driver to use...
	I1101 10:34:18.950104  461914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:34:18.950156  461914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:34:18.974965  461914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:34:19.000828  461914 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:34:19.000893  461914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:34:19.018627  461914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:34:19.033341  461914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:34:19.250126  461914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:34:19.456757  461914 docker.go:234] disabling docker service ...
	I1101 10:34:19.456816  461914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:34:19.489169  461914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:34:19.504867  461914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:34:19.696935  461914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:34:19.886289  461914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:34:19.900078  461914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:34:19.918083  461914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:34:19.918139  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.930046  461914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:34:19.930110  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.940860  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.951689  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.962197  461914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:34:19.972606  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.983227  461914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:19.992248  461914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:20.004082  461914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:34:20.017147  461914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:34:20.026269  461914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:20.196501  461914 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:34:20.454266  461914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:34:20.454329  461914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:34:20.462193  461914 start.go:564] Will wait 60s for crictl version
	I1101 10:34:20.462247  461914 ssh_runner.go:195] Run: which crictl
	I1101 10:34:20.467542  461914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:34:20.507802  461914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:34:20.507896  461914 ssh_runner.go:195] Run: crio --version
	I1101 10:34:20.545024  461914 ssh_runner.go:195] Run: crio --version
	I1101 10:34:20.579207  461914 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
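The two 60-second waits a few lines above (first for the /var/run/crio/crio.sock socket, then for a crictl version response) are a simple poll-until-deadline pattern. A small sketch of the socket half; the path and timeout come from the log, the poll interval is an arbitrary choice:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the socket file until it appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists; crictl version can be probed next
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("CRI-O socket is present")
    }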
	I1101 10:34:20.582252  461914 cli_runner.go:164] Run: docker network inspect cert-expiration-459318 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:34:20.600011  461914 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:34:20.604341  461914 kubeadm.go:884] updating cluster {Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:34:20.604446  461914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:20.604501  461914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:20.640197  461914 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:20.640208  461914 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:34:20.640263  461914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:20.666147  461914 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:20.666160  461914 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:34:20.666167  461914 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:34:20.666295  461914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-459318 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:34:20.666378  461914 ssh_runner.go:195] Run: crio config
	I1101 10:34:20.744653  461914 cni.go:84] Creating CNI manager for ""
	I1101 10:34:20.744664  461914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:34:20.744684  461914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:34:20.744720  461914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-459318 NodeName:cert-expiration-459318 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:34:20.744855  461914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-459318"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
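The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch of reading it back and listing each document's kind, assuming the gopkg.in/yaml.v3 package and the on-host path the log copies it to a few lines below:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        // Decode each "---"-separated document until the stream ends.
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                fmt.Println("decode error:", err)
                return
            }
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }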
	
	I1101 10:34:20.744930  461914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:34:20.756250  461914 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:34:20.756319  461914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:34:20.764709  461914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:34:20.778601  461914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:34:20.792489  461914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1101 10:34:20.806316  461914 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:34:20.810204  461914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:20.964210  461914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:34:20.979030  461914 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318 for IP: 192.168.85.2
	I1101 10:34:20.979060  461914 certs.go:195] generating shared ca certs ...
	I1101 10:34:20.979076  461914 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:20.979214  461914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:34:20.979256  461914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:34:20.979261  461914 certs.go:257] generating profile certs ...
	W1101 10:34:20.979373  461914 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1101 10:34:20.979594  461914 certs.go:624] cert expired /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt: expiration: 2025-11-01 10:33:43 +0000 UTC, now: 2025-11-01 10:34:20.979587449 +0000 UTC m=+9.889791633
	I1101 10:34:20.979709  461914 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.key
	I1101 10:34:20.979726  461914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt with IP's: []
	I1101 10:34:21.894720  461914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt ...
	I1101 10:34:21.894744  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt: {Name:mk4da3e45f843d998e9f8de8b38d2a40b00fe60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:21.894893  461914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.key ...
	I1101 10:34:21.894903  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.key: {Name:mke714c3fd61de6c853a252b8d16603c719e962c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1101 10:34:21.895063  461914 out.go:285] ! Certificate apiserver.crt.7a7775ab has expired. Generating a new one...
	I1101 10:34:21.895092  461914 certs.go:624] cert expired /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt.7a7775ab: expiration: 2025-11-01 10:33:43 +0000 UTC, now: 2025-11-01 10:34:21.895085554 +0000 UTC m=+10.805289746
	I1101 10:34:21.895178  461914 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.key.7a7775ab
	I1101 10:34:21.895192  461914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt.7a7775ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:34:22.029210  461914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt.7a7775ab ...
	I1101 10:34:22.029252  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt.7a7775ab: {Name:mk460ead55413b4f61b487e5103623ab71480442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:22.029403  461914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.key.7a7775ab ...
	I1101 10:34:22.029412  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.key.7a7775ab: {Name:mk22ffd89eb530e2875030c1561fcef077d1a462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:22.029470  461914 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt.7a7775ab -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt
	I1101 10:34:22.029603  461914 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.key.7a7775ab -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.key
	W1101 10:34:22.032813  461914 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1101 10:34:22.032836  461914 certs.go:624] cert expired /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.crt: expiration: 2025-11-01 10:33:44 +0000 UTC, now: 2025-11-01 10:34:22.032830754 +0000 UTC m=+10.943034947
	I1101 10:34:22.036890  461914 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.key
	I1101 10:34:22.036919  461914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.crt with IP's: []
	I1101 10:34:22.739547  461914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.crt ...
	I1101 10:34:22.739561  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.crt: {Name:mkd8b6026c90b0542409bf4a53c315ff4208839c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:22.739747  461914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.key ...
	I1101 10:34:22.739758  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.key: {Name:mk16dfe7aced14e9794edf66be755d929ccb85eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
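The regeneration above is driven by a plain NotAfter comparison: each profile certificate is parsed and its expiry checked against the current time (here 10:33:43/10:33:44 UTC vs. roughly 10:34:21). A minimal sketch of that check with the standard library; the certificate path is one of the profile certs from the log, and the surrounding handling is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path copied from the log above; the other profile certs work the same way.
        path := "/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/client.crt"

        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        if time.Now().After(cert.NotAfter) {
            fmt.Printf("certificate expired at %s, regenerating\n", cert.NotAfter)
        } else {
            fmt.Printf("certificate valid until %s\n", cert.NotAfter)
        }
    }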
	I1101 10:34:22.740018  461914 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:34:22.740059  461914 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:34:22.740067  461914 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:34:22.740110  461914 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:34:22.740134  461914 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:34:22.740155  461914 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:34:22.740215  461914 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:34:22.740916  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:34:22.806277  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:34:22.847956  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:34:22.876521  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:34:22.923290  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:34:22.971031  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:34:23.026978  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:34:23.078922  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/cert-expiration-459318/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:34:23.118986  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:34:23.144491  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:34:23.178086  461914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:34:23.220774  461914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:34:23.265387  461914 ssh_runner.go:195] Run: openssl version
	I1101 10:34:23.289286  461914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:34:23.307775  461914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:34:23.312253  461914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:34:23.312319  461914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:34:23.388943  461914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:34:23.405599  461914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:34:23.432595  461914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:34:23.439894  461914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:34:23.439962  461914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:34:23.527118  461914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:34:23.543746  461914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:34:23.552527  461914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:34:23.557101  461914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:34:23.557165  461914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:34:23.638247  461914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:34:23.649188  461914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:34:23.665199  461914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:34:23.760802  461914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:34:23.857590  461914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:34:23.941501  461914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:34:24.028379  461914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:34:24.113543  461914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:34:24.203363  461914 kubeadm.go:401] StartCluster: {Name:cert-expiration-459318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-459318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:24.203445  461914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:34:24.203511  461914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:34:24.271102  461914 cri.go:89] found id: "f7256a9701e4ca2ec942defde646e6c4bcb9a7f565f1a1debd15c4241b8cbe68"
	I1101 10:34:24.271112  461914 cri.go:89] found id: "8eda3739387e28649e29e459a4b37e6d47a8e62a2b3f6cd7e0a6f5e3c64c6556"
	I1101 10:34:24.271115  461914 cri.go:89] found id: "736a76302a6c31833ddc1c12dc329eead76c9f7b832f34931289b37e2b8b24ff"
	I1101 10:34:24.271118  461914 cri.go:89] found id: "47439d8390fece7945c2abf7028b24a8954f6b724abe8bf29551d47b9440ad8d"
	I1101 10:34:24.271122  461914 cri.go:89] found id: "37fdf16cb68e47d663ee22d4ce30e0f2cfabc16cd43eb333ee858ce14f9cb77c"
	I1101 10:34:24.271125  461914 cri.go:89] found id: "9c917f3bed62a0c1ec50e058c61760e8d30858a40dd8020721d3bd6be2e4a578"
	I1101 10:34:24.271127  461914 cri.go:89] found id: "72daf7e2588dfebb67654fd88fcb7a250fc6b8085ee2fa2b8bdcc921fe7033b4"
	I1101 10:34:24.271129  461914 cri.go:89] found id: "0c063a65b4d5e324b31c3a9b10c39d09d5d784dcf5809093507d339ec3a3852d"
	I1101 10:34:24.271131  461914 cri.go:89] found id: "dca5473a3e052cfc935e96777baff2bfcc25520350623531979037682cf814fb"
	I1101 10:34:24.271138  461914 cri.go:89] found id: "d79c2e8e2f3345188ca4b42169f1a4d029c38f1383bacff735b4ff12f9e9275e"
	I1101 10:34:24.271140  461914 cri.go:89] found id: "1e1fa5a7eaf0a46fff8b35649b694d0165034909d54b62d54dbd7153873f57f0"
	I1101 10:34:24.271142  461914 cri.go:89] found id: "3a475c3de493af6bffc7e7d7de9b7c22c077f75fb8ae2868401030fbfb052cc7"
	I1101 10:34:24.271144  461914 cri.go:89] found id: "39bd8f99ca7556b661d1bdb57d6397c614132c7933afb9b49cf210e8358cc638"
	I1101 10:34:24.271147  461914 cri.go:89] found id: "547d9eda0115b91a910ef2426ced1f07d23a6707699af4a3714ecc6260d6a20c"
	I1101 10:34:24.271149  461914 cri.go:89] found id: "88fa609d19891248c1de38fe0c1c7de0c9f9b44248d65126f46527482e95f049"
	I1101 10:34:24.271153  461914 cri.go:89] found id: "03fccc3cce6e05075b94b1a16116515ddaa893c9a2a03cd10f4c9248efbfe895"
	I1101 10:34:24.271155  461914 cri.go:89] found id: ""
	I1101 10:34:24.271207  461914 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:34:24.292058  461914 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:34:24Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:34:24.292127  461914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:34:24.305405  461914 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:34:24.305414  461914 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:34:24.305465  461914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:34:24.327891  461914 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:34:24.328567  461914 kubeconfig.go:125] found "cert-expiration-459318" server: "https://192.168.85.2:8443"
	I1101 10:34:24.330557  461914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:34:24.348383  461914 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:34:24.348406  461914 kubeadm.go:602] duration metric: took 42.986997ms to restartPrimaryControlPlane
	I1101 10:34:24.348414  461914 kubeadm.go:403] duration metric: took 145.061696ms to StartCluster
	I1101 10:34:24.348431  461914 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:24.348498  461914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:34:24.349465  461914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:24.349678  461914 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:34:24.350089  461914 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:34:24.350157  461914 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-459318"
	I1101 10:34:24.350169  461914 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-459318"
	W1101 10:34:24.350174  461914 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:34:24.350196  461914 host.go:66] Checking if "cert-expiration-459318" exists ...
	I1101 10:34:24.350644  461914 cli_runner.go:164] Run: docker container inspect cert-expiration-459318 --format={{.State.Status}}
	I1101 10:34:24.351044  461914 config.go:182] Loaded profile config "cert-expiration-459318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:24.351098  461914 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-459318"
	I1101 10:34:24.351118  461914 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-459318"
	I1101 10:34:24.351387  461914 cli_runner.go:164] Run: docker container inspect cert-expiration-459318 --format={{.State.Status}}
	I1101 10:34:24.355746  461914 out.go:179] * Verifying Kubernetes components...
	I1101 10:34:24.360689  461914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:24.399567  461914 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-459318"
	W1101 10:34:24.399576  461914 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:34:24.399600  461914 host.go:66] Checking if "cert-expiration-459318" exists ...
	I1101 10:34:24.400012  461914 cli_runner.go:164] Run: docker container inspect cert-expiration-459318 --format={{.State.Status}}
	I1101 10:34:24.423229  461914 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:34:24.423241  461914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:34:24.423303  461914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-459318
	I1101 10:34:24.452457  461914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/cert-expiration-459318/id_rsa Username:docker}
	I1101 10:34:24.453253  461914 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.468198074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.474700282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.475552239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.49220443Z" level=info msg="Created container c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p/dashboard-metrics-scraper" id=83d86006-5be3-42fa-9d9e-40e41d65f64b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.494548721Z" level=info msg="Starting container: c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72" id=6eecc742-dbde-4150-99d2-c2f720cedc3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.496419414Z" level=info msg="Started container" PID=1630 containerID=c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p/dashboard-metrics-scraper id=6eecc742-dbde-4150-99d2-c2f720cedc3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=0105e472aab020673e89f44c64000129856f537e0c25cd458a68d4cb34fb2724
	Nov 01 10:34:05 old-k8s-version-180313 conmon[1628]: conmon c70e29e0a4b7c8c90f84 <ninfo>: container 1630 exited with status 1
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.795618888Z" level=info msg="Removing container: 1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974" id=38074d01-fc49-4068-a261-e9a0e87b1088 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.806165642Z" level=info msg="Error loading conmon cgroup of container 1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974: cgroup deleted" id=38074d01-fc49-4068-a261-e9a0e87b1088 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:34:05 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:05.809204342Z" level=info msg="Removed container 1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p/dashboard-metrics-scraper" id=38074d01-fc49-4068-a261-e9a0e87b1088 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.422807127Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.438034594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.438200619Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.438276632Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.45169986Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.451915543Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.452022219Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.462890314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.463055067Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.463134059Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.47228617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.472452654Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.472536421Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.48606027Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:34:12 old-k8s-version-180313 crio[648]: time="2025-11-01T10:34:12.48627345Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c70e29e0a4b7c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   0105e472aab02       dashboard-metrics-scraper-5f989dc9cf-d8s5p       kubernetes-dashboard
	b7f346f641936       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   d153843c1b6a2       storage-provisioner                              kube-system
	e7f9c82d186de       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   426ea044e54ae       kubernetes-dashboard-8694d4445c-wt2nm            kubernetes-dashboard
	ddeeb83c62030       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   add87bc476aed       coredns-5dd5756b68-ltprk                         kube-system
	7cc4abc01044d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   5ac3561d0e5f0       busybox                                          default
	0ce288ea22101       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   a52280133e87e       kube-proxy-ltbrb                                 kube-system
	e34ebc504a578       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   d153843c1b6a2       storage-provisioner                              kube-system
	637fc39de58e8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   3992fcfe5f9fa       kindnet-2qdl9                                    kube-system
	bbb2ffd94dc53       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   fd7bd299d2d69       kube-controller-manager-old-k8s-version-180313   kube-system
	ee76c6ed75d1e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   8eb6c8fc170e6       etcd-old-k8s-version-180313                      kube-system
	527c1aae77a9e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   4ede815b41d7b       kube-apiserver-old-k8s-version-180313            kube-system
	4e0b8f9a18f71       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   f50e7705efe83       kube-scheduler-old-k8s-version-180313            kube-system
	
	
	==> coredns [ddeeb83c620307d11f123bac2aa9499fb43e2a9c7406a2c998952da43aad6bfa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39958 - 47129 "HINFO IN 8504101705225528951.6274424222166963424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021155622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-180313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-180313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=old-k8s-version-180313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_32_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:32:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-180313
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:34:01 +0000   Sat, 01 Nov 2025 10:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-180313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                315a9da4-3be7-492f-b967-608664aed87a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-ltprk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-180313                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-2qdl9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-180313             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-180313    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-ltbrb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-180313             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-d8s5p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wt2nm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-180313 event: Registered Node old-k8s-version-180313 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-180313 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-180313 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-180313 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-180313 event: Registered Node old-k8s-version-180313 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee76c6ed75d1e26dfa7a963bf48a1d032962e8b362b818a13d5814aefecdc9df] <==
	{"level":"info","ts":"2025-11-01T10:33:25.218168Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:33:25.218202Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:33:25.218439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-01T10:33:25.218533Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:33:25.218674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:33:25.218747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:33:25.232527Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:33:25.233128Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T10:33:25.232863Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:33:25.23336Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:33:25.23342Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:33:26.704507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:33:26.704628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:33:26.704684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-01T10:33:26.704723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.704754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.704791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.704824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-01T10:33:26.708463Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-180313 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:33:26.708647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:33:26.711865Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:33:26.711994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:33:26.715516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-01T10:33:26.719952Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:33:26.737776Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:34:27 up  2:16,  0 user,  load average: 3.41, 3.61, 2.86
	Linux old-k8s-version-180313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [637fc39de58e802e58e0a2de44f07dad9e0d27382ddb64ee5b22a8c3b6a4584a] <==
	I1101 10:33:32.218546       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:33:32.218767       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:33:32.218898       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:33:32.218915       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:33:32.218928       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:33:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:33:32.419471       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:33:32.419488       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:33:32.419497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:33:32.419626       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:34:02.419888       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:34:02.419889       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:34:02.420126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:34:02.420229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 10:34:03.620077       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:34:03.620210       1 metrics.go:72] Registering metrics
	I1101 10:34:03.620340       1 controller.go:711] "Syncing nftables rules"
	I1101 10:34:12.421757       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:34:12.421814       1 main.go:301] handling current node
	I1101 10:34:22.426184       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:34:22.426223       1 main.go:301] handling current node
	
	
	==> kube-apiserver [527c1aae77a9ef7d7753fee214b43dddb0b3ba83158c2de982968514735a6e82] <==
	I1101 10:33:30.615322       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1101 10:33:30.644283       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:33:30.644360       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:33:30.644366       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:33:30.644372       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:33:30.690669       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:33:30.716234       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:33:30.717477       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:33:30.718362       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:33:30.730901       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:33:30.733193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:33:30.739199       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1101 10:33:30.741548       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:33:30.744959       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:33:31.581205       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:33:32.412114       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:33:32.455482       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:33:32.482727       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:33:32.495444       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:33:32.505390       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:33:32.560280       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.236.181"}
	I1101 10:33:32.576980       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.166.96"}
	I1101 10:33:43.479432       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:33:43.488634       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:33:43.639201       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bbb2ffd94dc5362517e75879e833273c6d849a640ba961071b27a88cf786f508] <==
	I1101 10:33:43.580261       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-d8s5p"
	I1101 10:33:43.581125       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-wt2nm"
	I1101 10:33:43.581804       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1101 10:33:43.583961       1 shared_informer.go:318] Caches are synced for endpoint
	I1101 10:33:43.597264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.244203ms"
	I1101 10:33:43.610915       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 10:33:43.614621       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:33:43.624735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="126.82163ms"
	I1101 10:33:43.645012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.129812ms"
	I1101 10:33:43.646040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.097µs"
	I1101 10:33:43.651226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.906314ms"
	I1101 10:33:43.651406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.933µs"
	I1101 10:33:43.665317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.629µs"
	I1101 10:33:43.994562       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:33:44.014933       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:33:44.014992       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:33:48.751417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.595µs"
	I1101 10:33:49.771097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.069µs"
	I1101 10:33:50.772273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.492µs"
	I1101 10:33:53.789462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.242073ms"
	I1101 10:33:53.790295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.531µs"
	I1101 10:34:04.655557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.300223ms"
	I1101 10:34:04.655661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.507µs"
	I1101 10:34:05.815521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.475µs"
	I1101 10:34:13.904715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.212µs"
	
	
	==> kube-proxy [0ce288ea2210149663378329c5a02b5fd6174c052665e287644f3a46a6df08f7] <==
	I1101 10:33:32.190348       1 server_others.go:69] "Using iptables proxy"
	I1101 10:33:32.256896       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1101 10:33:32.282639       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:33:32.284855       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:33:32.285064       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:33:32.285103       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:33:32.285163       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:33:32.285441       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:33:32.285954       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:32.286835       1 config.go:188] "Starting service config controller"
	I1101 10:33:32.286934       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:33:32.286987       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:33:32.287021       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:33:32.287652       1 config.go:315] "Starting node config controller"
	I1101 10:33:32.287738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:33:32.387593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:33:32.387599       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:33:32.387847       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4e0b8f9a18f71411eace0341504ba546aebc0d91bdd8bc805e54ead023a3c60c] <==
	I1101 10:33:27.737415       1 serving.go:348] Generated self-signed cert in-memory
	W1101 10:33:30.556326       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:33:30.556424       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:33:30.556458       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:33:30.556518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:33:30.666230       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:33:30.666268       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:30.670105       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:33:30.670200       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:33:30.670772       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:30.670835       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W1101 10:33:30.702465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.702584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 10:33:30.702736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1101 10:33:30.702805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1101 10:33:30.702950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1101 10:33:30.702988       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1101 10:33:30.703293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.703366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 10:33:30.703882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.703958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 10:33:30.704066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1101 10:33:30.704647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	I1101 10:33:30.777819       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.616530     773 topology_manager.go:215] "Topology Admit Handler" podUID="954439ef-73b3-44b2-bf87-2f7761a1c85b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-wt2nm"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684001     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa887d66-d751-4945-bdd3-79f83ba6a844-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8s5p\" (UID: \"fa887d66-d751-4945-bdd3-79f83ba6a844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684100     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/954439ef-73b3-44b2-bf87-2f7761a1c85b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-wt2nm\" (UID: \"954439ef-73b3-44b2-bf87-2f7761a1c85b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wt2nm"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684140     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dr6w\" (UniqueName: \"kubernetes.io/projected/fa887d66-d751-4945-bdd3-79f83ba6a844-kube-api-access-6dr6w\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8s5p\" (UID: \"fa887d66-d751-4945-bdd3-79f83ba6a844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p"
	Nov 01 10:33:43 old-k8s-version-180313 kubelet[773]: I1101 10:33:43.684189     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4m8\" (UniqueName: \"kubernetes.io/projected/954439ef-73b3-44b2-bf87-2f7761a1c85b-kube-api-access-5m4m8\") pod \"kubernetes-dashboard-8694d4445c-wt2nm\" (UID: \"954439ef-73b3-44b2-bf87-2f7761a1c85b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wt2nm"
	Nov 01 10:33:48 old-k8s-version-180313 kubelet[773]: I1101 10:33:48.735513     773 scope.go:117] "RemoveContainer" containerID="e5345d093f08b8a84ddc0f861202ae64b8befe001fa68e5f79d690524c5b4794"
	Nov 01 10:33:49 old-k8s-version-180313 kubelet[773]: I1101 10:33:49.743436     773 scope.go:117] "RemoveContainer" containerID="e5345d093f08b8a84ddc0f861202ae64b8befe001fa68e5f79d690524c5b4794"
	Nov 01 10:33:49 old-k8s-version-180313 kubelet[773]: I1101 10:33:49.743741     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:33:49 old-k8s-version-180313 kubelet[773]: E1101 10:33:49.744014     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:33:50 old-k8s-version-180313 kubelet[773]: I1101 10:33:50.750897     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:33:50 old-k8s-version-180313 kubelet[773]: E1101 10:33:50.751165     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:33:53 old-k8s-version-180313 kubelet[773]: I1101 10:33:53.889339     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:33:53 old-k8s-version-180313 kubelet[773]: E1101 10:33:53.889738     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:34:02 old-k8s-version-180313 kubelet[773]: I1101 10:34:02.782401     773 scope.go:117] "RemoveContainer" containerID="e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c"
	Nov 01 10:34:02 old-k8s-version-180313 kubelet[773]: I1101 10:34:02.809127     773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wt2nm" podStartSLOduration=10.64563652 podCreationTimestamp="2025-11-01 10:33:43 +0000 UTC" firstStartedPulling="2025-11-01 10:33:43.942448538 +0000 UTC m=+19.618720369" lastFinishedPulling="2025-11-01 10:33:53.105378355 +0000 UTC m=+28.781650185" observedRunningTime="2025-11-01 10:33:53.775278631 +0000 UTC m=+29.451550470" watchObservedRunningTime="2025-11-01 10:34:02.808566336 +0000 UTC m=+38.484838175"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: I1101 10:34:05.465467     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: I1101 10:34:05.793366     773 scope.go:117] "RemoveContainer" containerID="1690abbcc7e5f820a1d7fe60002dbc4ece72b630f1c6a1808af5c5afc4e32974"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: I1101 10:34:05.793756     773 scope.go:117] "RemoveContainer" containerID="c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	Nov 01 10:34:05 old-k8s-version-180313 kubelet[773]: E1101 10:34:05.794075     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:34:13 old-k8s-version-180313 kubelet[773]: I1101 10:34:13.889165     773 scope.go:117] "RemoveContainer" containerID="c70e29e0a4b7c8c90f84a5c212a0236aa535bcc7a7c0adbe8eee8a93c409cd72"
	Nov 01 10:34:13 old-k8s-version-180313 kubelet[773]: E1101 10:34:13.890263     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8s5p_kubernetes-dashboard(fa887d66-d751-4945-bdd3-79f83ba6a844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8s5p" podUID="fa887d66-d751-4945-bdd3-79f83ba6a844"
	Nov 01 10:34:19 old-k8s-version-180313 kubelet[773]: I1101 10:34:19.639972     773 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:34:19 old-k8s-version-180313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:34:19 old-k8s-version-180313 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:34:19 old-k8s-version-180313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e7f9c82d186de380ac4c95709a6f4e841288f59b2f20cc353cb533bbe34ae795] <==
	2025/11/01 10:33:53 Using namespace: kubernetes-dashboard
	2025/11/01 10:33:53 Using in-cluster config to connect to apiserver
	2025/11/01 10:33:53 Using secret token for csrf signing
	2025/11/01 10:33:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:33:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:33:53 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:33:53 Generating JWE encryption key
	2025/11/01 10:33:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:33:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:33:53 Initializing JWE encryption key from synchronized object
	2025/11/01 10:33:53 Creating in-cluster Sidecar client
	2025/11/01 10:33:53 Serving insecurely on HTTP port: 9090
	2025/11/01 10:33:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:34:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:33:53 Starting overwatch
	
	
	==> storage-provisioner [b7f346f64193604fa373e321cc06889057058a09b32bd43aaeff438939dc1eca] <==
	I1101 10:34:02.845505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:34:02.868092       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:34:02.868193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:34:20.272365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:34:20.277590       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180313_edc991ae-54be-4d4d-a709-9810096df9b0!
	I1101 10:34:20.281637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f6ac2f7-82d3-49b9-9a4c-13a56b4eb794", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-180313_edc991ae-54be-4d4d-a709-9810096df9b0 became leader
	I1101 10:34:20.378187       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180313_edc991ae-54be-4d4d-a709-9810096df9b0!
	
	
	==> storage-provisioner [e34ebc504a578f431ae701279e46598b5704c72d1af12964f1662589246f169c] <==
	I1101 10:33:32.107825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:34:02.111466       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
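The kubelet entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff, with the back-off doubling from 10s to 20s between restarts. A minimal sketch for pulling the crashed container's output by hand, reusing the pod name printed in the kubelet log (the 5f989dc9cf-d8s5p suffix is per-deployment and will differ on other runs):

	# sketch: fetch the previous (crashed) container's logs for the scraper pod named in the kubelet log above
	kubectl --context old-k8s-version-180313 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-d8s5p --previous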
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180313 -n old-k8s-version-180313
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180313 -n old-k8s-version-180313: exit status 2 (603.929069ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-180313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (9.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (309.040482ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:35:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
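The exit status 11 comes from the paused-state check quoted in the stderr above: the addon enable runs "sudo runc list -f json" inside the node and aborts because /run/runc does not exist. A minimal reproduction sketch, assuming the same profile and the stock "minikube ssh" wrapper (this simply re-runs the quoted command; it is not minikube's internal code path):

	# sketch: re-run the exact command quoted in the MK_ADDON_ENABLE_PAUSED error inside the node
	out/minikube-linux-arm64 -p no-preload-170467 ssh -- "sudo runc list -f json"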
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-170467 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-170467 describe deploy/metrics-server -n kube-system: exit status 1 (92.549867ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-170467 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
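The assertion checks that the --images/--registries flags rewrote the metrics-server image to the fake.domain registry; here there is nothing to inspect because the deployment was never created (see the NotFound error above). A hedged sketch of the manual check for a run where the addon does deploy (the jsonpath expression is illustrative, not taken from the test):

	# sketch: print the image the metrics-server deployment actually carries
	kubectl --context no-preload-170467 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'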
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-170467
helpers_test.go:243: (dbg) docker inspect no-preload-170467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174",
	        "Created": "2025-11-01T10:34:34.605945811Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 464754,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:34:34.805554157Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/hostname",
	        "HostsPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/hosts",
	        "LogPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174-json.log",
	        "Name": "/no-preload-170467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-170467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-170467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174",
	                "LowerDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-170467",
	                "Source": "/var/lib/docker/volumes/no-preload-170467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-170467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-170467",
	                "name.minikube.sigs.k8s.io": "no-preload-170467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e67ed722cd2c1408dd49a76322dd46b05b2bfefb84f57645ebd692b73ca9e9e",
	            "SandboxKey": "/var/run/docker/netns/8e67ed722cd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-170467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:8a:3e:5d:6d:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a76db7f9c768e30abf0f10f25f36c5fa2518f946ae0f8436a94ea13f0365a6d0",
	                    "EndpointID": "4ab2d799cd7d7b040abffda0652e6b2db783e83a604000f844e95d3a7df1b711",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-170467",
	                        "496a258eae10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
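The inspect output shows the kicbase container publishing its 8443/tcp API-server endpoint on an ephemeral host port (33423 on this run). A hedged one-liner to read that mapping directly, using the same Go-template pattern the provisioning log further down applies to 22/tcp:

	# sketch: resolve the host port that fronts the in-container API server on 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-170467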
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-170467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-170467 logs -n 25: (1.242969541s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-220636 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ ssh     │ -p cilium-220636 sudo crio config                                                                                                                                                                                                             │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │                     │
	│ delete  │ -p cilium-220636                                                                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:30 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p force-systemd-env-065424                                                                                                                                                                                                                   │ force-systemd-env-065424 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ cert-options-082900 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	│ stop    │ -p old-k8s-version-180313 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:34:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:34:38.127025  465703 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:34:38.127157  465703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:38.127169  465703 out.go:374] Setting ErrFile to fd 2...
	I1101 10:34:38.127174  465703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:34:38.127440  465703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:34:38.127868  465703 out.go:368] Setting JSON to false
	I1101 10:34:38.128729  465703 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8228,"bootTime":1761985051,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:34:38.128800  465703 start.go:143] virtualization:  
	I1101 10:34:38.131885  465703 out.go:179] * [embed-certs-618070] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:34:38.136032  465703 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:34:38.136168  465703 notify.go:221] Checking for updates...
	I1101 10:34:38.142011  465703 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:34:38.145031  465703 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:34:38.148046  465703 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:34:38.150993  465703 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:34:38.153853  465703 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:34:38.157169  465703 config.go:182] Loaded profile config "no-preload-170467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:38.157282  465703 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:34:38.192392  465703 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:34:38.192517  465703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:38.252584  465703 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-01 10:34:38.243317804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:38.252695  465703 docker.go:319] overlay module found
	I1101 10:34:38.255909  465703 out.go:179] * Using the docker driver based on user configuration
	I1101 10:34:38.258845  465703 start.go:309] selected driver: docker
	I1101 10:34:38.258870  465703 start.go:930] validating driver "docker" against <nil>
	I1101 10:34:38.258884  465703 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:34:38.259642  465703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:34:38.316137  465703 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-01 10:34:38.306802648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:34:38.316303  465703 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:34:38.316532  465703 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:34:38.319446  465703 out.go:179] * Using Docker driver with root privileges
	I1101 10:34:38.322207  465703 cni.go:84] Creating CNI manager for ""
	I1101 10:34:38.322277  465703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:34:38.322291  465703 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:34:38.322383  465703 start.go:353] cluster config:
	{Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:38.325498  465703 out.go:179] * Starting "embed-certs-618070" primary control-plane node in "embed-certs-618070" cluster
	I1101 10:34:38.328255  465703 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:34:38.331291  465703 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:34:38.334165  465703 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:38.334226  465703 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:34:38.334244  465703 cache.go:59] Caching tarball of preloaded images
	I1101 10:34:38.334275  465703 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:34:38.334356  465703 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:34:38.334367  465703 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:34:38.334476  465703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json ...
	I1101 10:34:38.334501  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json: {Name:mk7f276ca8cba83a1f0dfe552fc03644e834a8fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:38.355554  465703 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:34:38.355583  465703 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:34:38.355601  465703 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:34:38.355624  465703 start.go:360] acquireMachinesLock for embed-certs-618070: {Name:mk13307b6a73c01f486aea48ffd4761ad677dd7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:34:38.355741  465703 start.go:364] duration metric: took 98.857µs to acquireMachinesLock for "embed-certs-618070"
	I1101 10:34:38.355772  465703 start.go:93] Provisioning new machine with config: &{Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:34:38.355840  465703 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:34:39.357802  464341 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-170467
	
	I1101 10:34:39.357825  464341 ubuntu.go:182] provisioning hostname "no-preload-170467"
	I1101 10:34:39.357897  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:39.381836  464341 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:39.382141  464341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33420 <nil> <nil>}
	I1101 10:34:39.382153  464341 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-170467 && echo "no-preload-170467" | sudo tee /etc/hostname
	I1101 10:34:39.549737  464341 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-170467
	
	I1101 10:34:39.549897  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:39.572751  464341 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:39.573051  464341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33420 <nil> <nil>}
	I1101 10:34:39.573068  464341 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-170467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-170467/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-170467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:34:39.734856  464341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
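
The two SSH commands above set the node's hostname and pin it to 127.0.1.1 in /etc/hosts so the name resolves inside the container. A minimal way to double-check the result from the host (a sketch; it assumes the no-preload-170467 node container is still running):

	# Sketch: verify hostname provisioning from the host.
	docker exec no-preload-170467 hostname
	docker exec no-preload-170467 grep '127.0.1.1' /etc/hosts
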
	I1101 10:34:39.734934  464341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:34:39.734972  464341 ubuntu.go:190] setting up certificates
	I1101 10:34:39.735014  464341 provision.go:84] configureAuth start
	I1101 10:34:39.735131  464341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-170467
	I1101 10:34:39.759242  464341 provision.go:143] copyHostCerts
	I1101 10:34:39.759307  464341 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:34:39.759316  464341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:34:39.759385  464341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:34:39.759468  464341 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:34:39.759473  464341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:34:39.759500  464341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:34:39.759557  464341 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:34:39.759561  464341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:34:39.759587  464341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:34:39.759633  464341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.no-preload-170467 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-170467]
	I1101 10:34:40.074014  464341 provision.go:177] copyRemoteCerts
	I1101 10:34:40.074143  464341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:34:40.074234  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:40.093537  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:40.203638  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:34:40.227086  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:34:40.251266  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:34:40.273417  464341 provision.go:87] duration metric: took 538.361853ms to configureAuth
	I1101 10:34:40.273495  464341 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:34:40.273714  464341 config.go:182] Loaded profile config "no-preload-170467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:40.273880  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:40.291427  464341 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:40.291728  464341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33420 <nil> <nil>}
	I1101 10:34:40.291744  464341 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:34:40.630055  464341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:34:40.630080  464341 machine.go:97] duration metric: took 4.481106698s to provisionDockerMachine
	I1101 10:34:40.630091  464341 client.go:176] duration metric: took 7.655927567s to LocalClient.Create
	I1101 10:34:40.630106  464341 start.go:167] duration metric: took 7.656126668s to libmachine.API.Create "no-preload-170467"
	I1101 10:34:40.630115  464341 start.go:293] postStartSetup for "no-preload-170467" (driver="docker")
	I1101 10:34:40.630127  464341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:34:40.630217  464341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:34:40.630271  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:40.654945  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:40.763014  464341 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:34:40.767932  464341 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:34:40.767967  464341 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:34:40.767979  464341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:34:40.768034  464341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:34:40.768121  464341 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:34:40.768240  464341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:34:40.777076  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:34:40.798994  464341 start.go:296] duration metric: took 168.862189ms for postStartSetup
	I1101 10:34:40.799423  464341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-170467
	I1101 10:34:40.821058  464341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/config.json ...
	I1101 10:34:40.821353  464341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:34:40.821427  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:40.843957  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:40.955557  464341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:34:40.961121  464341 start.go:128] duration metric: took 8.018487599s to createHost
	I1101 10:34:40.961143  464341 start.go:83] releasing machines lock for "no-preload-170467", held for 8.018614536s
	I1101 10:34:40.961224  464341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-170467
	I1101 10:34:40.985942  464341 ssh_runner.go:195] Run: cat /version.json
	I1101 10:34:40.985994  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:40.986213  464341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:34:40.986279  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:41.022760  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:41.030253  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:41.231048  464341 ssh_runner.go:195] Run: systemctl --version
	I1101 10:34:41.238459  464341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:34:41.283798  464341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:34:41.288702  464341 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:34:41.288822  464341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:34:41.335861  464341 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:34:41.335934  464341 start.go:496] detecting cgroup driver to use...
	I1101 10:34:41.335981  464341 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:34:41.336070  464341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:34:41.370779  464341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:34:41.394702  464341 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:34:41.394882  464341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:34:41.437129  464341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:34:41.489028  464341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:34:41.730306  464341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:34:41.933548  464341 docker.go:234] disabling docker service ...
	I1101 10:34:41.933672  464341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:34:41.969237  464341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:34:41.995749  464341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:34:42.180396  464341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:34:42.321269  464341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:34:42.336018  464341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:34:42.352087  464341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:34:42.352198  464341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.362319  464341 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:34:42.362472  464341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.372363  464341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.384613  464341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.395515  464341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:34:42.404718  464341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.418429  464341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.433495  464341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:42.443763  464341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:34:42.451839  464341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:34:42.459830  464341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
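
The sed/sysctl sequence above rewrites the CRI-O drop-in (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and enables IPv4 forwarding before the daemon configuration is reloaded. A quick way to confirm the net effect inside the node (a sketch; the container name and file paths are taken from the log):

	# Sketch: inspect the CRI-O drop-in and the sysconfig options written earlier.
	docker exec no-preload-170467 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	docker exec no-preload-170467 cat /etc/sysconfig/crio.minikube
	docker exec no-preload-170467 cat /proc/sys/net/ipv4/ip_forward
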
	I1101 10:34:38.359288  465703 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:34:38.359523  465703 start.go:159] libmachine.API.Create for "embed-certs-618070" (driver="docker")
	I1101 10:34:38.359569  465703 client.go:173] LocalClient.Create starting
	I1101 10:34:38.359651  465703 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 10:34:38.359689  465703 main.go:143] libmachine: Decoding PEM data...
	I1101 10:34:38.359706  465703 main.go:143] libmachine: Parsing certificate...
	I1101 10:34:38.359766  465703 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 10:34:38.359796  465703 main.go:143] libmachine: Decoding PEM data...
	I1101 10:34:38.359810  465703 main.go:143] libmachine: Parsing certificate...
	I1101 10:34:38.360202  465703 cli_runner.go:164] Run: docker network inspect embed-certs-618070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:34:38.376739  465703 cli_runner.go:211] docker network inspect embed-certs-618070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:34:38.376839  465703 network_create.go:284] running [docker network inspect embed-certs-618070] to gather additional debugging logs...
	I1101 10:34:38.376862  465703 cli_runner.go:164] Run: docker network inspect embed-certs-618070
	W1101 10:34:38.394070  465703 cli_runner.go:211] docker network inspect embed-certs-618070 returned with exit code 1
	I1101 10:34:38.394105  465703 network_create.go:287] error running [docker network inspect embed-certs-618070]: docker network inspect embed-certs-618070: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-618070 not found
	I1101 10:34:38.394120  465703 network_create.go:289] output of [docker network inspect embed-certs-618070]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-618070 not found
	
	** /stderr **
	I1101 10:34:38.394235  465703 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:34:38.410153  465703 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4026c1b0063 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ce:bd:30:c3:d1} reservation:<nil>}
	I1101 10:34:38.410531  465703 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e394bead07b9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:98:c6:36:ba:b7} reservation:<nil>}
	I1101 10:34:38.410795  465703 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bd8719a80444 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:75:48:52:a5:ee} reservation:<nil>}
	I1101 10:34:38.411146  465703 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a76db7f9c768 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:45:80:9f:47:77} reservation:<nil>}
	I1101 10:34:38.411585  465703 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a43d80}
	I1101 10:34:38.411606  465703 network_create.go:124] attempt to create docker network embed-certs-618070 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:34:38.411666  465703 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-618070 embed-certs-618070
	I1101 10:34:38.468023  465703 network_create.go:108] docker network embed-certs-618070 192.168.85.0/24 created
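
minikube probes the existing bridge networks, skips the subnets already in use (192.168.49/58/67/76.0/24 above) and creates a dedicated network on the first free one, here 192.168.85.0/24. The labels applied in the create command make the network easy to find afterwards (a sketch using only those labels and standard docker commands):

	# Sketch: list minikube-created networks and show the subnet picked for this profile.
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true --format '{{.Name}}'
	docker network inspect embed-certs-618070 --format '{{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}'
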
	I1101 10:34:38.468057  465703 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-618070" container
	I1101 10:34:38.468129  465703 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:34:38.484593  465703 cli_runner.go:164] Run: docker volume create embed-certs-618070 --label name.minikube.sigs.k8s.io=embed-certs-618070 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:34:38.503254  465703 oci.go:103] Successfully created a docker volume embed-certs-618070
	I1101 10:34:38.503349  465703 cli_runner.go:164] Run: docker run --rm --name embed-certs-618070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-618070 --entrypoint /usr/bin/test -v embed-certs-618070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:34:39.019388  465703 oci.go:107] Successfully prepared a docker volume embed-certs-618070
	I1101 10:34:39.019443  465703 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:39.019471  465703 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:34:39.019548  465703 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-618070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:34:42.580080  464341 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:34:43.969104  464341 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.38898364s)
	I1101 10:34:43.969127  464341 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:34:43.969177  464341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:34:43.973853  464341 start.go:564] Will wait 60s for crictl version
	I1101 10:34:43.973915  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:43.978020  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:34:44.018746  464341 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:34:44.018849  464341 ssh_runner.go:195] Run: crio --version
	I1101 10:34:44.059068  464341 ssh_runner.go:195] Run: crio --version
	I1101 10:34:44.107083  464341 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:34:44.110112  464341 cli_runner.go:164] Run: docker network inspect no-preload-170467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:34:44.131429  464341 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:34:44.135742  464341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:34:44.159086  464341 kubeadm.go:884] updating cluster {Name:no-preload-170467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-170467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:34:44.159197  464341 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:44.159237  464341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:44.193532  464341 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:34:44.193558  464341 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 10:34:44.193611  464341 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:44.193838  464341 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:44.193974  464341 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:44.194075  464341 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:44.194166  464341 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:44.194279  464341 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:34:44.194372  464341 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:44.194488  464341 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:44.196764  464341 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:44.198708  464341 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:44.199442  464341 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:44.199651  464341 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:44.199817  464341 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:44.200438  464341 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:34:44.200635  464341 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:44.200839  464341 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:44.419726  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:44.422937  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1101 10:34:44.432086  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:44.436568  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:44.438475  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:44.441644  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:44.441852  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:44.896445  464341 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1101 10:34:44.896488  464341 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:44.896456  464341 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1101 10:34:44.896559  464341 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 10:34:44.896565  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:44.896607  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:44.938054  464341 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1101 10:34:44.938099  464341 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:44.938149  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:44.938218  464341 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1101 10:34:44.938237  464341 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:44.938261  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:45.040440  464341 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1101 10:34:45.040499  464341 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:45.040557  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:45.040648  464341 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1101 10:34:45.040676  464341 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:45.040707  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:45.040764  464341 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1101 10:34:45.040791  464341 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:45.040821  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:45.040900  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:34:45.040952  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:45.041044  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:45.041099  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:45.244687  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:45.244754  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:45.244790  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:45.244846  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:45.244885  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:45.244988  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:45.245216  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:34:45.536591  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:34:45.536680  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:45.536736  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:45.536790  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:45.536842  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:34:45.536900  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:34:45.536952  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:34:45.723477  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:34:45.723657  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:34:45.723769  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1101 10:34:45.723860  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1101 10:34:45.723970  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:34:45.724062  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:34:45.724151  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:34:45.724230  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:34:45.724315  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:34:45.724395  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:34:45.724478  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	W1101 10:34:45.756070  464341 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1101 10:34:45.756336  464341 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:45.824522  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1101 10:34:45.824634  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1101 10:34:45.877686  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1101 10:34:45.877837  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1101 10:34:45.877945  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:34:45.878080  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:34:45.878175  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1101 10:34:45.878222  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1101 10:34:45.878312  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:34:45.878401  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:34:45.878479  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:34:45.878561  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:34:45.878692  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1101 10:34:45.878764  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1101 10:34:45.878871  464341 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1101 10:34:45.878927  464341 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:45.878991  464341 ssh_runner.go:195] Run: which crictl
	I1101 10:34:45.958004  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:45.958128  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1101 10:34:45.958271  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1101 10:34:45.958101  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1101 10:34:45.958360  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1101 10:34:45.958176  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1101 10:34:45.958435  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1101 10:34:45.989472  464341 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1101 10:34:45.989597  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1101 10:34:45.998305  464341 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1101 10:34:45.998415  464341 retry.go:31] will retry after 205.333604ms: ssh: rejected: connect failed (open failed)
	I1101 10:34:46.130452  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:46.130538  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:46.165824  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:46.204160  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:34:46.247018  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:34:46.589172  464341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:34:46.589243  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1101 10:34:46.709428  464341 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:34:46.709677  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:34:46.759898  464341 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 10:34:46.760081  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
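
Because no preload tarball applies to this profile, each required image is checked with podman, removed from the runtime when the expected digest is missing, copied over from the host cache, and re-loaded with podman load. Once loading finishes the images should be visible to the CRI runtime; a sketch of how to verify (it assumes the node container is up and crictl sits at the path shown in the log):

	# Sketch: confirm the cached images ended up in CRI-O's store.
	docker exec no-preload-170467 sudo /usr/local/bin/crictl images
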
	I1101 10:34:43.854644  465703 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-618070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.835059067s)
	I1101 10:34:43.854676  465703 kic.go:203] duration metric: took 4.835201651s to extract preloaded images to volume ...
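
For the embed-certs profile the preload path is used instead: the lz4 tarball is bind-mounted read-only and untarred straight into the profile's Docker volume, which later becomes the node's /var. A sketch for peeking into that volume with the same kicbase image (the image is already in the local daemon per the log; the presence of /bin/ls inside it is an assumption):

	# Sketch: list what the preload extraction left in the embed-certs-618070 volume.
	docker run --rm --entrypoint /bin/ls \
	  -v embed-certs-618070:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
	  /var/lib
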
	W1101 10:34:43.854813  465703 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:34:43.854930  465703 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:34:43.954103  465703 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-618070 --name embed-certs-618070 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-618070 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-618070 --network embed-certs-618070 --ip 192.168.85.2 --volume embed-certs-618070:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:34:44.330543  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Running}}
	I1101 10:34:44.352156  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:34:44.380950  465703 cli_runner.go:164] Run: docker exec embed-certs-618070 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:34:44.449545  465703 oci.go:144] the created container "embed-certs-618070" has a running status.
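
The node container only publishes SSH (and the other minikube ports) on ephemeral 127.0.0.1 ports, which is why every later step first runs the docker container inspect template to discover the host port (33425 here). The same lookup can be done by hand (a sketch, using the template from the log plus docker port):

	# Sketch: recover the host port mapped to the node's SSH port 22.
	docker port embed-certs-618070 22
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-618070
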
	I1101 10:34:44.449576  465703 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa...
	I1101 10:34:44.949565  465703 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:34:44.972030  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:34:44.991657  465703 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:34:44.991677  465703 kic_runner.go:114] Args: [docker exec --privileged embed-certs-618070 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:34:45.071856  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:34:45.099944  465703 machine.go:94] provisionDockerMachine start ...
	I1101 10:34:45.100054  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:45.126002  465703 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:45.126361  465703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1101 10:34:45.126372  465703 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:34:45.127194  465703 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:34:48.281932  465703 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-618070
	
	I1101 10:34:48.281959  465703 ubuntu.go:182] provisioning hostname "embed-certs-618070"
	I1101 10:34:48.282042  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:48.307322  465703 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:48.307640  465703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1101 10:34:48.307653  465703 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-618070 && echo "embed-certs-618070" | sudo tee /etc/hostname
	I1101 10:34:48.476066  465703 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-618070
	
	I1101 10:34:48.476145  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:48.495554  465703 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:48.495865  465703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1101 10:34:48.495892  465703 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-618070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-618070/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-618070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:34:48.654138  465703 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:34:48.654166  465703 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:34:48.654198  465703 ubuntu.go:190] setting up certificates
	I1101 10:34:48.654208  465703 provision.go:84] configureAuth start
	I1101 10:34:48.654273  465703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:34:48.674629  465703 provision.go:143] copyHostCerts
	I1101 10:34:48.674699  465703 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:34:48.674709  465703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:34:48.677228  465703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:34:48.677398  465703 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:34:48.677411  465703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:34:48.677456  465703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:34:48.677531  465703 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:34:48.677540  465703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:34:48.677572  465703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:34:48.677636  465703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.embed-certs-618070 san=[127.0.0.1 192.168.85.2 embed-certs-618070 localhost minikube]
	I1101 10:34:48.910048  465703 provision.go:177] copyRemoteCerts
	I1101 10:34:48.910124  465703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:34:48.910171  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:48.931594  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:34:49.038737  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:34:49.059364  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:34:49.079930  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:34:49.100211  465703 provision.go:87] duration metric: took 445.977715ms to configureAuth
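
configureAuth generates a server certificate whose SANs cover 127.0.0.1, the node's bridge IP and its hostnames, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. To check the SANs actually baked into the generated certificate (a sketch; the path is the one logged above and openssl is assumed to be available on the host):

	# Sketch: show the Subject Alternative Names of the generated server certificate.
	openssl x509 -in /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'
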
	I1101 10:34:49.100289  465703 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:34:49.100515  465703 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:34:49.100727  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:49.123362  465703 main.go:143] libmachine: Using SSH client type: native
	I1101 10:34:49.123683  465703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1101 10:34:49.123698  465703 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:34:49.421222  465703 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:34:49.421316  465703 machine.go:97] duration metric: took 4.32135108s to provisionDockerMachine
	I1101 10:34:49.421346  465703 client.go:176] duration metric: took 11.061764457s to LocalClient.Create
	I1101 10:34:49.421412  465703 start.go:167] duration metric: took 11.061855879s to libmachine.API.Create "embed-certs-618070"
	I1101 10:34:49.421440  465703 start.go:293] postStartSetup for "embed-certs-618070" (driver="docker")
	I1101 10:34:49.421463  465703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:34:49.421558  465703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:34:49.421629  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:49.459384  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:34:49.574166  465703 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:34:49.578451  465703 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:34:49.578479  465703 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:34:49.578491  465703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:34:49.578548  465703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:34:49.578631  465703 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:34:49.578732  465703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:34:49.587655  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:34:49.606338  465703 start.go:296] duration metric: took 184.870764ms for postStartSetup
	I1101 10:34:49.606769  465703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:34:49.624338  465703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json ...
	I1101 10:34:49.624619  465703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:34:49.624668  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:49.646930  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:34:49.751008  465703 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:34:49.757854  465703 start.go:128] duration metric: took 11.401996294s to createHost
	I1101 10:34:49.757886  465703 start.go:83] releasing machines lock for "embed-certs-618070", held for 11.402122015s
	I1101 10:34:49.757971  465703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:34:49.779320  465703 ssh_runner.go:195] Run: cat /version.json
	I1101 10:34:49.779374  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:49.779674  465703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:34:49.779730  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:34:49.814736  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:34:49.818283  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:34:50.037974  465703 ssh_runner.go:195] Run: systemctl --version
	I1101 10:34:50.046331  465703 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:34:50.107921  465703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:34:50.113507  465703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:34:50.113652  465703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:34:50.146077  465703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:34:50.146153  465703 start.go:496] detecting cgroup driver to use...
	I1101 10:34:50.146201  465703 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:34:50.146290  465703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:34:50.167527  465703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:34:50.182016  465703 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:34:50.182131  465703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:34:50.203248  465703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:34:50.223309  465703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:34:50.379278  465703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:34:50.538120  465703 docker.go:234] disabling docker service ...
	I1101 10:34:50.538189  465703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:34:50.564312  465703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:34:50.579331  465703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:34:50.736847  465703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:34:50.891733  465703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:34:50.908485  465703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:34:50.936572  465703 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:34:50.936636  465703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:50.954925  465703 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:34:50.955004  465703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:50.966858  465703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:50.985032  465703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:50.999931  465703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:34:51.024989  465703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:51.040329  465703 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:51.057843  465703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:34:51.074189  465703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:34:51.082857  465703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:34:51.091318  465703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:51.296125  465703 ssh_runner.go:195] Run: sudo systemctl restart crio
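
The sequence above points crictl at the CRI-O socket and then rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is forced to cgroupfs, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before the runtime is restarted. A minimal sketch of the same steps done by hand, assuming a node with CRI-O installed and the same 02-crio.conf drop-in path:

    # Point crictl at the CRI-O socket (the same /etc/crictl.yaml minikube writes).
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and switch the cgroup manager, mirroring the sed edits above.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # Reload units and restart the runtime so the new settings take effect.
    sudo systemctl daemon-reload
    sudo systemctl restart crio
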
	I1101 10:34:52.161089  465703 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:34:52.161303  465703 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:34:52.166390  465703 start.go:564] Will wait 60s for crictl version
	I1101 10:34:52.166512  465703 ssh_runner.go:195] Run: which crictl
	I1101 10:34:52.170532  465703 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:34:52.208779  465703 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:34:52.208935  465703 ssh_runner.go:195] Run: crio --version
	I1101 10:34:52.239922  465703 ssh_runner.go:195] Run: crio --version
	I1101 10:34:52.278824  465703 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:34:48.682557  464341 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.972793449s)
	I1101 10:34:48.682578  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1101 10:34:48.682595  464341 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:34:48.682638  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:34:48.682698  464341 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.922586561s)
	I1101 10:34:48.682712  464341 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 10:34:48.682726  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1101 10:34:50.230526  464341 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.547868018s)
	I1101 10:34:50.230549  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1101 10:34:50.230566  464341 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:34:50.230612  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:34:52.519956  464341 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.289325773s)
	I1101 10:34:52.519979  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1101 10:34:52.519996  464341 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:34:52.520043  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:34:52.281906  465703 cli_runner.go:164] Run: docker network inspect embed-certs-618070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:34:52.298866  465703 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:34:52.302975  465703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
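
The /etc/hosts edit above uses a grep-and-rewrite pattern instead of a plain append, so the host.minikube.internal entry is replaced rather than duplicated on repeated starts. The same idempotent pattern, sketched with placeholder values (example.internal and 192.0.2.10 are illustrative, not taken from this run):

    # Rewrite /etc/hosts so exactly one entry exists for the given name (bash syntax).
    NAME=example.internal
    ADDR=192.0.2.10
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${ADDR}" "${NAME}"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
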
	I1101 10:34:52.313407  465703 kubeadm.go:884] updating cluster {Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:34:52.313608  465703 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:34:52.313727  465703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:52.352192  465703 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:52.352271  465703 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:34:52.352362  465703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:52.388067  465703 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:52.388129  465703 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:34:52.388163  465703 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:34:52.388291  465703 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-618070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:34:52.388408  465703 ssh_runner.go:195] Run: crio config
	I1101 10:34:52.458570  465703 cni.go:84] Creating CNI manager for ""
	I1101 10:34:52.458593  465703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:34:52.458608  465703 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:34:52.458653  465703 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-618070 NodeName:embed-certs-618070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:34:52.458813  465703 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-618070"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:34:52.458935  465703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:34:52.468534  465703 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:34:52.468600  465703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:34:52.478716  465703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 10:34:52.494642  465703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:34:52.518641  465703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
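
The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before kubeadm init consumes it. When editing such a file by hand, a quick sanity check is to validate it with the same kubeadm binary that will use it; this is a hedged sketch assuming the `kubeadm config validate` subcommand available in recent kubeadm releases:

    # Validate the staged config with the kubeadm that will run the init.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
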
	I1101 10:34:52.533480  465703 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:34:52.537654  465703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:34:52.548224  465703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:52.682732  465703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:34:52.700435  465703 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070 for IP: 192.168.85.2
	I1101 10:34:52.700506  465703 certs.go:195] generating shared ca certs ...
	I1101 10:34:52.700539  465703 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:52.700729  465703 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:34:52.700806  465703 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:34:52.700849  465703 certs.go:257] generating profile certs ...
	I1101 10:34:52.700947  465703 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.key
	I1101 10:34:52.700981  465703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.crt with IP's: []
	I1101 10:34:53.155594  465703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.crt ...
	I1101 10:34:53.155669  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.crt: {Name:mkd4bee0ac20ef0771d67b145a062e53a025d86c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:53.155890  465703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.key ...
	I1101 10:34:53.155928  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.key: {Name:mke7ce5088fbdaa8cb357d204749214a0e44ecd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:53.156111  465703 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key.eb801fed
	I1101 10:34:53.156169  465703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt.eb801fed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:34:53.514172  465703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt.eb801fed ...
	I1101 10:34:53.514246  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt.eb801fed: {Name:mk5b619514824a9df4a10b8d5265d6d304d48310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:53.514475  465703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key.eb801fed ...
	I1101 10:34:53.514488  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key.eb801fed: {Name:mk2d303e7cc3817686114c994419bf26825594b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:53.514564  465703 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt.eb801fed -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt
	I1101 10:34:53.514637  465703 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key.eb801fed -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key
	I1101 10:34:53.514690  465703 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key
	I1101 10:34:53.514705  465703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.crt with IP's: []
	I1101 10:34:54.134795  465703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.crt ...
	I1101 10:34:54.134867  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.crt: {Name:mkb50628c037e24c7c9f6a3362b67d92739f9df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:54.135067  465703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key ...
	I1101 10:34:54.135082  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key: {Name:mke0aaf5570e63242c0a0b4777eea5a93c319353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
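
All of the profile certificates above are signed by minikube's own CA, and the apiserver certificate carries IP SANs for the in-cluster service VIP (10.96.0.1), loopback, and the node address 192.168.85.2. For comparison only, a similar certificate can be produced with plain openssl; the file names and the ca.crt/ca.key pair below are placeholders standing in for the profile's CA material, not minikube's actual code path:

    # Key + CSR, then sign with the CA while attaching the IP SANs seen in the log.
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2') \
      -out apiserver.crt
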
	I1101 10:34:54.135258  465703 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:34:54.135296  465703 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:34:54.135305  465703 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:34:54.135338  465703 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:34:54.135361  465703 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:34:54.135386  465703 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:34:54.135432  465703 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:34:54.136061  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:34:54.155318  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:34:54.173888  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:34:54.193511  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:34:54.211939  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:34:54.230304  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:34:54.248422  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:34:54.266473  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:34:54.284888  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:34:54.303482  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:34:54.321893  465703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:34:54.341039  465703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:34:54.356634  465703 ssh_runner.go:195] Run: openssl version
	I1101 10:34:54.363822  465703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:34:54.373011  465703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:34:54.379899  465703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:34:54.380016  465703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:34:54.433274  465703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:34:54.445874  465703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:34:54.457216  465703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:34:54.463546  465703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:34:54.463627  465703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:34:54.509765  465703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:34:54.522634  465703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:34:54.534157  465703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:34:54.538188  465703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:34:54.538276  465703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:34:54.580552  465703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
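
The openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (for example b5213941.0 for minikubeCA.pem), which is the directory-lookup scheme OpenSSL uses to find trusted CAs. Installing one additional CA into that layout by hand follows the same pattern; a minimal sketch, assuming the PEM file is already under /usr/share/ca-certificates:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "${CERT}")
    # Create the hash-named symlink that the OpenSSL CA directory lookup expects.
    sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0"
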
	I1101 10:34:54.589559  465703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:34:54.593489  465703 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:34:54.593542  465703 kubeadm.go:401] StartCluster: {Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:34:54.593612  465703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:34:54.593681  465703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:34:54.629153  465703 cri.go:89] found id: ""
	I1101 10:34:54.629281  465703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:34:54.640187  465703 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:34:54.650140  465703 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:34:54.650245  465703 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:34:54.679921  465703 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:34:54.679991  465703 kubeadm.go:158] found existing configuration files:
	
	I1101 10:34:54.680058  465703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:34:54.699738  465703 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:34:54.699818  465703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:34:54.720346  465703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:34:54.730402  465703 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:34:54.730488  465703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:34:54.738749  465703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:34:54.747402  465703 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:34:54.747468  465703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:34:54.755045  465703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:34:54.763936  465703 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:34:54.764003  465703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:34:54.771611  465703 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:34:54.819640  465703 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:34:54.820024  465703 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:34:54.850141  465703 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:34:54.850224  465703 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:34:54.850266  465703 kubeadm.go:319] OS: Linux
	I1101 10:34:54.850317  465703 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:34:54.850372  465703 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:34:54.850425  465703 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:34:54.850479  465703 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:34:54.850534  465703 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:34:54.850588  465703 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:34:54.850639  465703 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:34:54.850693  465703 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:34:54.850744  465703 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:34:54.939586  465703 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:34:54.939718  465703 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:34:54.939828  465703 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:34:54.956628  465703 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:34:54.621566  464341 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.101502375s)
	I1101 10:34:54.621591  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 10:34:54.621620  464341 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:34:54.621672  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:34:56.405664  464341 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.783970193s)
	I1101 10:34:56.405687  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 10:34:56.405721  464341 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:34:56.405769  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:34:54.962528  465703 out.go:252]   - Generating certificates and keys ...
	I1101 10:34:54.962666  465703 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:34:54.962748  465703 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:34:55.445013  465703 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:34:55.631658  465703 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:34:56.096530  465703 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:34:56.296028  465703 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:34:57.697247  465703 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:34:57.697439  465703 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-618070 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:34:57.988171  465703 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:34:57.988686  465703 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-618070 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:34:58.573256  465703 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:34:58.681648  465703 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:34:59.170973  465703 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:34:59.171494  465703 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:34:59.927551  465703 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:35:01.231691  465703 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:35:01.376407  465703 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:35:01.807882  465703 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:35:01.921845  465703 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:35:01.923123  465703 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:35:01.936638  465703 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:35:01.298149  464341 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.892358392s)
	I1101 10:35:01.298176  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 10:35:01.298197  464341 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:35:01.298248  464341 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:35:02.060132  464341 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 10:35:02.060167  464341 cache_images.go:125] Successfully loaded all cached images
	I1101 10:35:02.060174  464341 cache_images.go:94] duration metric: took 17.866602437s to LoadCachedImages
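
For the no-preload profile the cached image tarballs are copied to the node and imported with podman, which in this setup shares its containers/storage backend with CRI-O, so the loaded images become visible to the runtime without any registry pull. A rough manual equivalent for a single image, with the tarball path taken from the log and the crictl call only as a verification step:

    # Import one cached tarball into containers/storage, then confirm CRI-O can see it.
    sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
    sudo crictl images | grep etcd
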
	I1101 10:35:02.060185  464341 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:35:02.060274  464341 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-170467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-170467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:35:02.060357  464341 ssh_runner.go:195] Run: crio config
	I1101 10:35:02.145315  464341 cni.go:84] Creating CNI manager for ""
	I1101 10:35:02.145339  464341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:35:02.145355  464341 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:35:02.145378  464341 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-170467 NodeName:no-preload-170467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:35:02.145515  464341 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-170467"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:35:02.145583  464341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:35:02.156509  464341 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 10:35:02.156576  464341 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 10:35:02.166527  464341 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1101 10:35:02.166587  464341 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1101 10:35:02.166650  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 10:35:02.166783  464341 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1101 10:35:02.172654  464341 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 10:35:02.172692  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1101 10:35:01.940776  465703 out.go:252]   - Booting up control plane ...
	I1101 10:35:01.940893  465703 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:35:01.940984  465703 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:35:01.942421  465703 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:35:01.988902  465703 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:35:01.989024  465703 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:35:01.998149  465703 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:35:01.998263  465703 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:35:01.998311  465703 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:35:02.166269  465703 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:35:02.166395  465703 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:35:02.960964  464341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:35:02.988333  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 10:35:03.008427  464341 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 10:35:03.008701  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1101 10:35:03.359499  464341 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 10:35:03.377317  464341 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 10:35:03.377499  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
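
The v1.34.1 kubectl, kubelet and kubeadm binaries are not on the node yet, so they are fetched from dl.k8s.io with their published sha256 checksums and then copied into /var/lib/minikube/binaries/v1.34.1. Doing the same download-and-verify by hand for one binary looks roughly like this (the .sha256 file is the standard per-binary checksum artifact published alongside each Kubernetes release):

    # Fetch kubelet for linux/arm64 plus its checksum, verify, and install it.
    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.34.1/kubelet
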
	I1101 10:35:03.859960  464341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:35:03.868549  464341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:35:03.883635  464341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:35:03.898587  464341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 10:35:03.913684  464341 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:35:03.917981  464341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:35:03.928332  464341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:35:04.121105  464341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:35:04.141658  464341 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467 for IP: 192.168.76.2
	I1101 10:35:04.141680  464341 certs.go:195] generating shared ca certs ...
	I1101 10:35:04.141739  464341 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:04.141884  464341 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:35:04.141929  464341 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:35:04.141940  464341 certs.go:257] generating profile certs ...
	I1101 10:35:04.141995  464341 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.key
	I1101 10:35:04.142008  464341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt with IP's: []
	I1101 10:35:04.723803  464341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt ...
	I1101 10:35:04.723839  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt: {Name:mka32ef26a7e616306015bffc000fe0f16b651a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:04.724077  464341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.key ...
	I1101 10:35:04.724096  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.key: {Name:mked8f03227c207af6651d9c2de03da21d042e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:04.724234  464341 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.key.cec5ff1a
	I1101 10:35:04.724256  464341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.crt.cec5ff1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:35:04.855100  464341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.crt.cec5ff1a ...
	I1101 10:35:04.855133  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.crt.cec5ff1a: {Name:mk749a17e2095bc18632be4c9f00d61d1d854aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:04.855337  464341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.key.cec5ff1a ...
	I1101 10:35:04.855357  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.key.cec5ff1a: {Name:mk815639c8a5f46244136dbf533df5f4257f43c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:04.855486  464341 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.crt.cec5ff1a -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.crt
	I1101 10:35:04.855606  464341 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.key.cec5ff1a -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.key
	I1101 10:35:04.855695  464341 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.key
	I1101 10:35:04.855739  464341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.crt with IP's: []
	I1101 10:35:05.338208  464341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.crt ...
	I1101 10:35:05.338241  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.crt: {Name:mke8f793b1895466a80c1f796b352774f62160bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:05.338460  464341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.key ...
	I1101 10:35:05.338480  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.key: {Name:mk888f581afbc46a36e94e62dce7577288b46e03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
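
The profile certs generated above are signed for a fixed SAN set: the apiserver cert covers the IPs listed in the "with IP's:" line (the service VIP, loopback, and the node IP among them). An illustrative way to confirm that after the fact, not part of the test run, reusing the profile path from the log:

	PROFILE=/home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467
	# Print the Subject Alternative Name extension of the freshly written apiserver cert;
	# the IP list should match [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] from the log.
	openssl x509 -noout -text -in "$PROFILE/apiserver.crt" | grep -A1 'Subject Alternative Name'
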
	I1101 10:35:05.338718  464341 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:35:05.338782  464341 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:35:05.338798  464341 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:35:05.338838  464341 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:35:05.338884  464341 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:35:05.338914  464341 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:35:05.338979  464341 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:35:05.339607  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:35:05.370138  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:35:05.404409  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:35:05.437309  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:35:05.469368  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:35:05.503419  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:35:05.530712  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:35:05.557016  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:35:05.590471  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:35:05.624854  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:35:05.663826  464341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:35:05.700516  464341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:35:05.736462  464341 ssh_runner.go:195] Run: openssl version
	I1101 10:35:05.751722  464341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:35:05.765244  464341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:35:05.771632  464341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:35:05.771728  464341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:35:05.835720  464341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:35:05.845159  464341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:35:05.863364  464341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:35:05.867549  464341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:35:05.867635  464341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:35:05.909845  464341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:35:05.919397  464341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:35:05.928900  464341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:35:05.933744  464341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:35:05.933862  464341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:35:05.976746  464341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
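
The openssl x509 -hash / ln -fs pairs above install each extra CA under its OpenSSL subject-hash name in /etc/ssl/certs, which is how the system trust store is indexed. A minimal sketch of one such pair, reusing the path from the log (the .0 suffix assumes no other certificate hashes to the same value):

	# Recreate the /etc/ssl/certs/<hash>.0 link set up for the extra certificate above.
	CERT=/usr/share/ca-certificates/2871352.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. 3ec20f2e, as in the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
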
	I1101 10:35:05.986591  464341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:35:05.992336  464341 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:35:05.992437  464341 kubeadm.go:401] StartCluster: {Name:no-preload-170467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-170467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:35:05.992535  464341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:35:05.992617  464341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:35:06.060885  464341 cri.go:89] found id: ""
	I1101 10:35:06.061023  464341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:35:06.077916  464341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:35:06.091836  464341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:35:06.091950  464341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:35:06.107195  464341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:35:06.107264  464341 kubeadm.go:158] found existing configuration files:
	
	I1101 10:35:06.107345  464341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:35:06.119438  464341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:35:06.119551  464341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:35:06.131686  464341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:35:06.143981  464341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:35:06.144065  464341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:35:06.155077  464341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:35:06.168677  464341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:35:06.168780  464341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:35:06.179758  464341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:35:06.193194  464341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:35:06.193306  464341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
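
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A condensed sketch of the same behaviour (not minikube's actual code), assuming the endpoint shown in the log:

	for name in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${name}.conf"
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "$conf" 2>/dev/null \
	    || sudo rm -f "$conf"
	done
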
	I1101 10:35:06.210170  464341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:35:06.274412  464341 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:35:06.278171  464341 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:35:06.332433  464341 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:35:06.332770  464341 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:35:06.332860  464341 kubeadm.go:319] OS: Linux
	I1101 10:35:06.332942  464341 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:35:06.333033  464341 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:35:06.333119  464341 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:35:06.333200  464341 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:35:06.333298  464341 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:35:06.333383  464341 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:35:06.333447  464341 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:35:06.333502  464341 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:35:06.333555  464341 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:35:06.458215  464341 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:35:06.458394  464341 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:35:06.458548  464341 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:35:06.490295  464341 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:35:06.497159  464341 out.go:252]   - Generating certificates and keys ...
	I1101 10:35:06.497314  464341 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:35:06.497413  464341 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:35:07.044952  464341 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:35:07.490726  464341 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:35:04.166019  465703 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000887977s
	I1101 10:35:04.177933  465703 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:35:04.178036  465703 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:35:04.178129  465703 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:35:04.178210  465703 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:35:08.126726  464341 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:35:08.620391  464341 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:35:09.913758  464341 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:35:09.915202  464341 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-170467] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:35:10.018045  464341 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:35:10.022897  464341 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-170467] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:35:10.343332  464341 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:35:11.066307  464341 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:35:11.221594  464341 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:35:11.222161  464341 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:35:11.601297  464341 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:35:11.744556  464341 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:35:11.838534  464341 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:35:12.364203  464341 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:35:12.618061  464341 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:35:12.618163  464341 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:35:12.621157  464341 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:35:10.706850  465703 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.529291141s
	I1101 10:35:13.970817  465703 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.79300154s
	I1101 10:35:16.179847  465703 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.002870944s
	I1101 10:35:16.210306  465703 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:35:16.233431  465703 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:35:16.255861  465703 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:35:16.256072  465703 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-618070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:35:16.277018  465703 kubeadm.go:319] [bootstrap-token] Using token: 9i1p7u.pf9j2yldksslfdm6
	I1101 10:35:12.626050  464341 out.go:252]   - Booting up control plane ...
	I1101 10:35:12.626166  464341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:35:12.626248  464341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:35:12.627161  464341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:35:12.654848  464341 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:35:12.654961  464341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:35:12.666040  464341 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:35:12.666144  464341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:35:12.666186  464341 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:35:12.885205  464341 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:35:12.885329  464341 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:35:13.890209  464341 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001285255s
	I1101 10:35:13.890322  464341 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:35:13.890407  464341 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:35:13.890500  464341 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:35:13.890582  464341 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:35:16.279901  465703 out.go:252]   - Configuring RBAC rules ...
	I1101 10:35:16.280025  465703 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:35:16.291740  465703 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:35:16.303417  465703 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:35:16.312663  465703 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:35:16.323531  465703 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:35:16.329928  465703 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:35:16.587661  465703 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:35:17.188059  465703 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:35:17.617144  465703 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:35:17.618276  465703 kubeadm.go:319] 
	I1101 10:35:17.618361  465703 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:35:17.618368  465703 kubeadm.go:319] 
	I1101 10:35:17.618449  465703 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:35:17.618454  465703 kubeadm.go:319] 
	I1101 10:35:17.618481  465703 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:35:17.618542  465703 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:35:17.618595  465703 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:35:17.618600  465703 kubeadm.go:319] 
	I1101 10:35:17.618656  465703 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:35:17.618661  465703 kubeadm.go:319] 
	I1101 10:35:17.618710  465703 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:35:17.618715  465703 kubeadm.go:319] 
	I1101 10:35:17.618769  465703 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:35:17.618847  465703 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:35:17.618921  465703 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:35:17.618933  465703 kubeadm.go:319] 
	I1101 10:35:17.619023  465703 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:35:17.619103  465703 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:35:17.619116  465703 kubeadm.go:319] 
	I1101 10:35:17.619203  465703 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9i1p7u.pf9j2yldksslfdm6 \
	I1101 10:35:17.619322  465703 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:35:17.619344  465703 kubeadm.go:319] 	--control-plane 
	I1101 10:35:17.619349  465703 kubeadm.go:319] 
	I1101 10:35:17.619437  465703 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:35:17.619441  465703 kubeadm.go:319] 
	I1101 10:35:17.619527  465703 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9i1p7u.pf9j2yldksslfdm6 \
	I1101 10:35:17.619633  465703 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:35:17.633443  465703 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:35:17.633688  465703 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:35:17.633829  465703 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
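
The sha256:... value in the join commands above is the hash of the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe, shown here only to explain where that string comes from (this assumes an RSA CA and the certificateDir /var/lib/minikube/certs used by these clusters):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
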
	I1101 10:35:17.633845  465703 cni.go:84] Creating CNI manager for ""
	I1101 10:35:17.633853  465703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:35:17.638962  465703 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:35:17.642012  465703 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:35:17.654480  465703 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:35:17.654498  465703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:35:17.712805  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:35:20.149076  464341 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.258987629s
	I1101 10:35:18.336374  465703 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:35:18.336506  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:18.336556  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-618070 minikube.k8s.io/updated_at=2025_11_01T10_35_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=embed-certs-618070 minikube.k8s.io/primary=true
	I1101 10:35:18.710646  465703 ops.go:34] apiserver oom_adj: -16
	I1101 10:35:18.710786  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:19.211417  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:19.711259  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:20.211593  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:20.711579  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:21.211771  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:21.711702  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:22.211151  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:22.711432  465703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:22.908126  465703 kubeadm.go:1114] duration metric: took 4.571672365s to wait for elevateKubeSystemPrivileges
	I1101 10:35:22.908153  465703 kubeadm.go:403] duration metric: took 28.314615379s to StartCluster
	I1101 10:35:22.908170  465703 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:22.908232  465703 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:35:22.909250  465703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:22.909466  465703 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:35:22.909632  465703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:35:22.909931  465703 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:35:22.910121  465703 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:35:22.910193  465703 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-618070"
	I1101 10:35:22.910218  465703 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-618070"
	I1101 10:35:22.910254  465703 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:35:22.910742  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:35:22.910939  465703 addons.go:70] Setting default-storageclass=true in profile "embed-certs-618070"
	I1101 10:35:22.910982  465703 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-618070"
	I1101 10:35:22.911306  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:35:22.915135  465703 out.go:179] * Verifying Kubernetes components...
	I1101 10:35:22.918328  465703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:35:22.962119  465703 addons.go:239] Setting addon default-storageclass=true in "embed-certs-618070"
	I1101 10:35:22.962164  465703 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:35:22.962604  465703 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:35:22.963635  465703 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:35:22.966747  465703 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:35:22.966770  465703 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:35:22.966840  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:35:22.993976  465703 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:35:22.993998  465703 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:35:22.994064  465703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:35:23.007563  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:35:23.030397  465703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:35:23.573388  464341 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.683780664s
	I1101 10:35:23.891495  464341 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.001708783s
	I1101 10:35:23.922472  464341 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:35:23.945099  464341 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:35:23.963738  464341 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:35:23.964175  464341 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-170467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:35:23.988919  464341 kubeadm.go:319] [bootstrap-token] Using token: 2i8v22.8v291woob1fhi6pp
	I1101 10:35:23.691532  465703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:35:23.738677  465703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:35:23.771625  465703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:35:23.771828  465703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:35:24.810985  465703 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.039331934s)
	I1101 10:35:24.811929  465703 node_ready.go:35] waiting up to 6m0s for node "embed-certs-618070" to be "Ready" ...
	I1101 10:35:24.812240  465703 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.040394586s)
	I1101 10:35:24.812258  465703 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
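
The sed pipeline above splices a hosts block into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the host gateway IP. An optional follow-up check, not part of the test run, against the new cluster's kubeconfig:

	kubectl -n kube-system get configmap coredns -o yaml | sed -n '/hosts {/,/}/p'
	# expected output:
	#        hosts {
	#           192.168.85.1 host.minikube.internal
	#           fallthrough
	#        }
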
	I1101 10:35:24.814479  465703 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.072245726s)
	I1101 10:35:24.814979  465703 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123416067s)
	I1101 10:35:24.884968  465703 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:35:23.991936  464341 out.go:252]   - Configuring RBAC rules ...
	I1101 10:35:23.992069  464341 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:35:24.016023  464341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:35:24.033541  464341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:35:24.048042  464341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:35:24.056382  464341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:35:24.065671  464341 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:35:24.298852  464341 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:35:24.762719  464341 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:35:25.298930  464341 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:35:25.300216  464341 kubeadm.go:319] 
	I1101 10:35:25.300305  464341 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:35:25.300317  464341 kubeadm.go:319] 
	I1101 10:35:25.300399  464341 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:35:25.300409  464341 kubeadm.go:319] 
	I1101 10:35:25.300435  464341 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:35:25.300501  464341 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:35:25.300557  464341 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:35:25.300565  464341 kubeadm.go:319] 
	I1101 10:35:25.300622  464341 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:35:25.300631  464341 kubeadm.go:319] 
	I1101 10:35:25.300683  464341 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:35:25.300693  464341 kubeadm.go:319] 
	I1101 10:35:25.300748  464341 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:35:25.300833  464341 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:35:25.300910  464341 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:35:25.300918  464341 kubeadm.go:319] 
	I1101 10:35:25.301007  464341 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:35:25.301092  464341 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:35:25.301101  464341 kubeadm.go:319] 
	I1101 10:35:25.301192  464341 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2i8v22.8v291woob1fhi6pp \
	I1101 10:35:25.301305  464341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:35:25.301331  464341 kubeadm.go:319] 	--control-plane 
	I1101 10:35:25.301337  464341 kubeadm.go:319] 
	I1101 10:35:25.301426  464341 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:35:25.301434  464341 kubeadm.go:319] 
	I1101 10:35:25.301519  464341 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2i8v22.8v291woob1fhi6pp \
	I1101 10:35:25.301666  464341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:35:25.306676  464341 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:35:25.306953  464341 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:35:25.307091  464341 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:35:25.307144  464341 cni.go:84] Creating CNI manager for ""
	I1101 10:35:25.307157  464341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:35:25.312115  464341 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:35:25.315058  464341 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:35:25.320499  464341 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:35:25.320522  464341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:35:25.336013  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:35:25.655693  464341 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:35:25.655797  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:25.655835  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-170467 minikube.k8s.io/updated_at=2025_11_01T10_35_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=no-preload-170467 minikube.k8s.io/primary=true
	I1101 10:35:25.844784  464341 ops.go:34] apiserver oom_adj: -16
	I1101 10:35:25.844884  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:26.345897  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:26.845507  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:27.345457  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:24.887715  465703 addons.go:515] duration metric: took 1.977592108s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:35:25.316308  465703 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-618070" context rescaled to 1 replicas
	W1101 10:35:26.815490  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	I1101 10:35:27.845767  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:28.345325  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:28.845860  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:29.345666  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:29.845486  464341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:35:29.976379  464341 kubeadm.go:1114] duration metric: took 4.320644776s to wait for elevateKubeSystemPrivileges
	I1101 10:35:29.976411  464341 kubeadm.go:403] duration metric: took 23.983978067s to StartCluster
	I1101 10:35:29.976428  464341 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:29.976490  464341 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:35:29.978024  464341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:35:29.978349  464341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:35:29.978613  464341 config.go:182] Loaded profile config "no-preload-170467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:35:29.978713  464341 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:35:29.978782  464341 addons.go:70] Setting storage-provisioner=true in profile "no-preload-170467"
	I1101 10:35:29.978798  464341 addons.go:239] Setting addon storage-provisioner=true in "no-preload-170467"
	I1101 10:35:29.978823  464341 host.go:66] Checking if "no-preload-170467" exists ...
	I1101 10:35:29.979341  464341 cli_runner.go:164] Run: docker container inspect no-preload-170467 --format={{.State.Status}}
	I1101 10:35:29.978686  464341 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:35:29.979986  464341 addons.go:70] Setting default-storageclass=true in profile "no-preload-170467"
	I1101 10:35:29.980016  464341 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-170467"
	I1101 10:35:29.980289  464341 cli_runner.go:164] Run: docker container inspect no-preload-170467 --format={{.State.Status}}
	I1101 10:35:29.984776  464341 out.go:179] * Verifying Kubernetes components...
	I1101 10:35:29.988599  464341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:35:30.021048  464341 addons.go:239] Setting addon default-storageclass=true in "no-preload-170467"
	I1101 10:35:30.021205  464341 host.go:66] Checking if "no-preload-170467" exists ...
	I1101 10:35:30.021722  464341 cli_runner.go:164] Run: docker container inspect no-preload-170467 --format={{.State.Status}}
	I1101 10:35:30.037020  464341 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:35:30.042251  464341 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:35:30.042284  464341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:35:30.042374  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:35:30.091269  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:35:30.094259  464341 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:35:30.094281  464341 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:35:30.094348  464341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:35:30.134125  464341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:35:30.388598  464341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:35:30.423721  464341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:35:30.473733  464341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:35:30.473916  464341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:35:31.386527  464341 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:35:31.388962  464341 node_ready.go:35] waiting up to 6m0s for node "no-preload-170467" to be "Ready" ...
	I1101 10:35:31.449433  464341 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:35:31.452393  464341 addons.go:515] duration metric: took 1.473658232s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:35:31.890531  464341 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-170467" context rescaled to 1 replicas
	W1101 10:35:29.315657  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:31.815540  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:33.393614  464341 node_ready.go:57] node "no-preload-170467" has "Ready":"False" status (will retry)
	W1101 10:35:35.893281  464341 node_ready.go:57] node "no-preload-170467" has "Ready":"False" status (will retry)
	W1101 10:35:33.815594  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:36.315483  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:38.393267  464341 node_ready.go:57] node "no-preload-170467" has "Ready":"False" status (will retry)
	W1101 10:35:40.892874  464341 node_ready.go:57] node "no-preload-170467" has "Ready":"False" status (will retry)
	W1101 10:35:38.315899  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:40.815543  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:42.893666  464341 node_ready.go:57] node "no-preload-170467" has "Ready":"False" status (will retry)
	I1101 10:35:44.394408  464341 node_ready.go:49] node "no-preload-170467" is "Ready"
	I1101 10:35:44.394434  464341 node_ready.go:38] duration metric: took 13.004774335s for node "no-preload-170467" to be "Ready" ...
	I1101 10:35:44.394448  464341 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:35:44.394506  464341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:35:44.420810  464341 api_server.go:72] duration metric: took 14.441266647s to wait for apiserver process to appear ...
	I1101 10:35:44.420832  464341 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:35:44.420850  464341 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:35:44.432993  464341 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
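The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with the body "ok" is what the tool waits for. Two illustrative ways to run the same check by hand (assuming the endpoint and kubeconfig from this run) are:

    curl -k https://192.168.76.2:8443/healthz
    kubectl get --raw /healthz

Both should print "ok" once the control plane is serving; `-k` skips verification of the cluster's self-signed CA and is only appropriate for this kind of smoke test.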
	I1101 10:35:44.434946  464341 api_server.go:141] control plane version: v1.34.1
	I1101 10:35:44.434973  464341 api_server.go:131] duration metric: took 14.135185ms to wait for apiserver health ...
	I1101 10:35:44.434982  464341 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:35:44.451168  464341 system_pods.go:59] 8 kube-system pods found
	I1101 10:35:44.451202  464341 system_pods.go:61] "coredns-66bc5c9577-f8tc4" [a1ca4576-5984-4992-8da9-de18b36fda4e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:35:44.451210  464341 system_pods.go:61] "etcd-no-preload-170467" [0dc7688a-3dfc-4934-ba0d-1027d9378cf2] Running
	I1101 10:35:44.451216  464341 system_pods.go:61] "kindnet-5n4vx" [cea402c3-b0d0-4f78-a280-78a5f2b96cd8] Running
	I1101 10:35:44.451221  464341 system_pods.go:61] "kube-apiserver-no-preload-170467" [c4d4f6c1-45d9-4c77-abb3-41e1093c35f9] Running
	I1101 10:35:44.451226  464341 system_pods.go:61] "kube-controller-manager-no-preload-170467" [0186af64-7998-4278-aaad-7f94d5206933] Running
	I1101 10:35:44.451230  464341 system_pods.go:61] "kube-proxy-8fvnf" [175e8939-23b3-40f6-8a2a-50617c44de73] Running
	I1101 10:35:44.451236  464341 system_pods.go:61] "kube-scheduler-no-preload-170467" [0e5d8d2d-e1f4-4144-91af-aba8ddf97489] Running
	I1101 10:35:44.451242  464341 system_pods.go:61] "storage-provisioner" [3a44a6d4-6c61-4a30-a833-7cd0b05ded40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:35:44.451248  464341 system_pods.go:74] duration metric: took 16.259808ms to wait for pod list to return data ...
	I1101 10:35:44.451256  464341 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:35:44.455049  464341 default_sa.go:45] found service account: "default"
	I1101 10:35:44.455132  464341 default_sa.go:55] duration metric: took 3.868708ms for default service account to be created ...
	I1101 10:35:44.455156  464341 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:35:44.553681  464341 system_pods.go:86] 8 kube-system pods found
	I1101 10:35:44.553753  464341 system_pods.go:89] "coredns-66bc5c9577-f8tc4" [a1ca4576-5984-4992-8da9-de18b36fda4e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:35:44.553761  464341 system_pods.go:89] "etcd-no-preload-170467" [0dc7688a-3dfc-4934-ba0d-1027d9378cf2] Running
	I1101 10:35:44.553767  464341 system_pods.go:89] "kindnet-5n4vx" [cea402c3-b0d0-4f78-a280-78a5f2b96cd8] Running
	I1101 10:35:44.553772  464341 system_pods.go:89] "kube-apiserver-no-preload-170467" [c4d4f6c1-45d9-4c77-abb3-41e1093c35f9] Running
	I1101 10:35:44.553776  464341 system_pods.go:89] "kube-controller-manager-no-preload-170467" [0186af64-7998-4278-aaad-7f94d5206933] Running
	I1101 10:35:44.553780  464341 system_pods.go:89] "kube-proxy-8fvnf" [175e8939-23b3-40f6-8a2a-50617c44de73] Running
	I1101 10:35:44.553784  464341 system_pods.go:89] "kube-scheduler-no-preload-170467" [0e5d8d2d-e1f4-4144-91af-aba8ddf97489] Running
	I1101 10:35:44.553790  464341 system_pods.go:89] "storage-provisioner" [3a44a6d4-6c61-4a30-a833-7cd0b05ded40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:35:44.553811  464341 retry.go:31] will retry after 230.299791ms: missing components: kube-dns
	I1101 10:35:44.789221  464341 system_pods.go:86] 8 kube-system pods found
	I1101 10:35:44.789257  464341 system_pods.go:89] "coredns-66bc5c9577-f8tc4" [a1ca4576-5984-4992-8da9-de18b36fda4e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:35:44.789264  464341 system_pods.go:89] "etcd-no-preload-170467" [0dc7688a-3dfc-4934-ba0d-1027d9378cf2] Running
	I1101 10:35:44.789270  464341 system_pods.go:89] "kindnet-5n4vx" [cea402c3-b0d0-4f78-a280-78a5f2b96cd8] Running
	I1101 10:35:44.789276  464341 system_pods.go:89] "kube-apiserver-no-preload-170467" [c4d4f6c1-45d9-4c77-abb3-41e1093c35f9] Running
	I1101 10:35:44.789281  464341 system_pods.go:89] "kube-controller-manager-no-preload-170467" [0186af64-7998-4278-aaad-7f94d5206933] Running
	I1101 10:35:44.789285  464341 system_pods.go:89] "kube-proxy-8fvnf" [175e8939-23b3-40f6-8a2a-50617c44de73] Running
	I1101 10:35:44.789289  464341 system_pods.go:89] "kube-scheduler-no-preload-170467" [0e5d8d2d-e1f4-4144-91af-aba8ddf97489] Running
	I1101 10:35:44.789294  464341 system_pods.go:89] "storage-provisioner" [3a44a6d4-6c61-4a30-a833-7cd0b05ded40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:35:44.789310  464341 retry.go:31] will retry after 302.279088ms: missing components: kube-dns
	I1101 10:35:45.095855  464341 system_pods.go:86] 8 kube-system pods found
	I1101 10:35:45.095900  464341 system_pods.go:89] "coredns-66bc5c9577-f8tc4" [a1ca4576-5984-4992-8da9-de18b36fda4e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:35:45.095909  464341 system_pods.go:89] "etcd-no-preload-170467" [0dc7688a-3dfc-4934-ba0d-1027d9378cf2] Running
	I1101 10:35:45.095916  464341 system_pods.go:89] "kindnet-5n4vx" [cea402c3-b0d0-4f78-a280-78a5f2b96cd8] Running
	I1101 10:35:45.095921  464341 system_pods.go:89] "kube-apiserver-no-preload-170467" [c4d4f6c1-45d9-4c77-abb3-41e1093c35f9] Running
	I1101 10:35:45.095925  464341 system_pods.go:89] "kube-controller-manager-no-preload-170467" [0186af64-7998-4278-aaad-7f94d5206933] Running
	I1101 10:35:45.095929  464341 system_pods.go:89] "kube-proxy-8fvnf" [175e8939-23b3-40f6-8a2a-50617c44de73] Running
	I1101 10:35:45.095933  464341 system_pods.go:89] "kube-scheduler-no-preload-170467" [0e5d8d2d-e1f4-4144-91af-aba8ddf97489] Running
	I1101 10:35:45.095940  464341 system_pods.go:89] "storage-provisioner" [3a44a6d4-6c61-4a30-a833-7cd0b05ded40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:35:45.095962  464341 retry.go:31] will retry after 314.489035ms: missing components: kube-dns
	I1101 10:35:45.416943  464341 system_pods.go:86] 8 kube-system pods found
	I1101 10:35:45.417043  464341 system_pods.go:89] "coredns-66bc5c9577-f8tc4" [a1ca4576-5984-4992-8da9-de18b36fda4e] Running
	I1101 10:35:45.417053  464341 system_pods.go:89] "etcd-no-preload-170467" [0dc7688a-3dfc-4934-ba0d-1027d9378cf2] Running
	I1101 10:35:45.417071  464341 system_pods.go:89] "kindnet-5n4vx" [cea402c3-b0d0-4f78-a280-78a5f2b96cd8] Running
	I1101 10:35:45.417129  464341 system_pods.go:89] "kube-apiserver-no-preload-170467" [c4d4f6c1-45d9-4c77-abb3-41e1093c35f9] Running
	I1101 10:35:45.417159  464341 system_pods.go:89] "kube-controller-manager-no-preload-170467" [0186af64-7998-4278-aaad-7f94d5206933] Running
	I1101 10:35:45.417165  464341 system_pods.go:89] "kube-proxy-8fvnf" [175e8939-23b3-40f6-8a2a-50617c44de73] Running
	I1101 10:35:45.417171  464341 system_pods.go:89] "kube-scheduler-no-preload-170467" [0e5d8d2d-e1f4-4144-91af-aba8ddf97489] Running
	I1101 10:35:45.417177  464341 system_pods.go:89] "storage-provisioner" [3a44a6d4-6c61-4a30-a833-7cd0b05ded40] Running
	I1101 10:35:45.417184  464341 system_pods.go:126] duration metric: took 962.009319ms to wait for k8s-apps to be running ...
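The retry loop above keeps listing kube-system pods until every required component has a Running pod; in this run coredns (kube-dns) was the only required component still Pending, and three short retries covered the gap. A hedged one-off equivalent of the check minikube performs would be:

    kubectl -n kube-system get pods -l k8s-app=kube-dns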
	I1101 10:35:45.417277  464341 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:35:45.417417  464341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:35:45.436470  464341 system_svc.go:56] duration metric: took 19.199856ms WaitForService to wait for kubelet
	I1101 10:35:45.436501  464341 kubeadm.go:587] duration metric: took 15.456963413s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:35:45.436523  464341 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:35:45.439829  464341 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:35:45.439904  464341 node_conditions.go:123] node cpu capacity is 2
	I1101 10:35:45.439924  464341 node_conditions.go:105] duration metric: took 3.395237ms to run NodePressure ...
	I1101 10:35:45.439938  464341 start.go:242] waiting for startup goroutines ...
	I1101 10:35:45.439946  464341 start.go:247] waiting for cluster config update ...
	I1101 10:35:45.439958  464341 start.go:256] writing updated cluster config ...
	I1101 10:35:45.440264  464341 ssh_runner.go:195] Run: rm -f paused
	I1101 10:35:45.447430  464341 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:35:45.451698  464341 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f8tc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.457179  464341 pod_ready.go:94] pod "coredns-66bc5c9577-f8tc4" is "Ready"
	I1101 10:35:45.457209  464341 pod_ready.go:86] duration metric: took 5.482386ms for pod "coredns-66bc5c9577-f8tc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.460019  464341 pod_ready.go:83] waiting for pod "etcd-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.465027  464341 pod_ready.go:94] pod "etcd-no-preload-170467" is "Ready"
	I1101 10:35:45.465055  464341 pod_ready.go:86] duration metric: took 5.008571ms for pod "etcd-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.467849  464341 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.473248  464341 pod_ready.go:94] pod "kube-apiserver-no-preload-170467" is "Ready"
	I1101 10:35:45.473278  464341 pod_ready.go:86] duration metric: took 5.402574ms for pod "kube-apiserver-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.476011  464341 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:45.851986  464341 pod_ready.go:94] pod "kube-controller-manager-no-preload-170467" is "Ready"
	I1101 10:35:45.852017  464341 pod_ready.go:86] duration metric: took 375.935574ms for pod "kube-controller-manager-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:46.052783  464341 pod_ready.go:83] waiting for pod "kube-proxy-8fvnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:46.452584  464341 pod_ready.go:94] pod "kube-proxy-8fvnf" is "Ready"
	I1101 10:35:46.452667  464341 pod_ready.go:86] duration metric: took 399.857237ms for pod "kube-proxy-8fvnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:46.652208  464341 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:47.051517  464341 pod_ready.go:94] pod "kube-scheduler-no-preload-170467" is "Ready"
	I1101 10:35:47.051551  464341 pod_ready.go:86] duration metric: took 399.265298ms for pod "kube-scheduler-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:35:47.051564  464341 pod_ready.go:40] duration metric: took 1.604096639s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
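pod_ready.go performs this extra wait directly against the API for each of the listed label selectors. An illustrative equivalent for one of them, as a hedged sketch using stock kubectl, would be:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=240s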
	I1101 10:35:47.106675  464341 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:35:47.111759  464341 out.go:179] * Done! kubectl is now configured to use "no-preload-170467" cluster and "default" namespace by default
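The version line above flags a one-minor skew (kubectl 1.33.2 against a 1.34.1 control plane), which is within kubectl's supported +/-1 minor window, so it is reported informationally rather than as an error. To confirm the context that "Done!" refers to, a quick check (assuming, as minikube normally does, that the context name matches the profile name) is:

    kubectl config current-context
    kubectl version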
	W1101 10:35:43.315113  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:45.316684  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:47.815473  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:49.815675  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	W1101 10:35:52.314748  465703 node_ready.go:57] node "embed-certs-618070" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:35:44 no-preload-170467 crio[842]: time="2025-11-01T10:35:44.460720472Z" level=info msg="Created container 88544478addbb3f43bec369fdeec6323ab1b93c82bdfbe129c540f49505c35f1: kube-system/coredns-66bc5c9577-f8tc4/coredns" id=ad68040b-8384-4754-ba65-51f52547f0da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:35:44 no-preload-170467 crio[842]: time="2025-11-01T10:35:44.463699094Z" level=info msg="Starting container: 88544478addbb3f43bec369fdeec6323ab1b93c82bdfbe129c540f49505c35f1" id=8c3827c1-4275-46c9-8dbe-d51a4a2de85e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:35:44 no-preload-170467 crio[842]: time="2025-11-01T10:35:44.468024123Z" level=info msg="Started container" PID=2491 containerID=88544478addbb3f43bec369fdeec6323ab1b93c82bdfbe129c540f49505c35f1 description=kube-system/coredns-66bc5c9577-f8tc4/coredns id=8c3827c1-4275-46c9-8dbe-d51a4a2de85e name=/runtime.v1.RuntimeService/StartContainer sandboxID=401d92a90e78bdfee0142eb0aa6a3e8073901e8ae9f0a9038466febecd4358ac
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.638346821Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9fe133f6-5e4b-43e1-87d1-e47ebaee1c9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.638417624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.643888203Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502 UID:63a3bfba-fa06-422e-9226-ff614dc0a6b5 NetNS:/var/run/netns/363434e3-67fb-41a5-8558-c517defb8d02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014668e0}] Aliases:map[]}"
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.643931675Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.655635622Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502 UID:63a3bfba-fa06-422e-9226-ff614dc0a6b5 NetNS:/var/run/netns/363434e3-67fb-41a5-8558-c517defb8d02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014668e0}] Aliases:map[]}"
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.655799375Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.661204574Z" level=info msg="Ran pod sandbox f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502 with infra container: default/busybox/POD" id=9fe133f6-5e4b-43e1-87d1-e47ebaee1c9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.662530583Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=09da4da5-cf06-4b4a-8845-9e4d2a6ad51a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.662669244Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=09da4da5-cf06-4b4a-8845-9e4d2a6ad51a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.662709893Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=09da4da5-cf06-4b4a-8845-9e4d2a6ad51a name=/runtime.v1.ImageService/ImageStatus
	(Note: "artfiact" in the preceding CRI-O message is the runtime's own spelling of "artifact"; the line is reproduced as emitted.)
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.663550501Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a70041e2-e044-49a8-98c4-4fa1ae0df93a name=/runtime.v1.ImageService/PullImage
	Nov 01 10:35:47 no-preload-170467 crio[842]: time="2025-11-01T10:35:47.666164307Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.841834815Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a70041e2-e044-49a8-98c4-4fa1ae0df93a name=/runtime.v1.ImageService/PullImage
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.842775971Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f62eac1-8106-4c4d-95c5-f7e761a05b7a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.844587184Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c3708c96-014d-4b0c-ab17-b9693947fd4c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.850755016Z" level=info msg="Creating container: default/busybox/busybox" id=7229fbfb-89d5-4994-a900-60e1f6157fa5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.851071954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.860995104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.86427052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.883148809Z" level=info msg="Created container 38143a50790b91e9bc5697a644df7b3bb4838f253d4ff1ad789da8d8fbfa498d: default/busybox/busybox" id=7229fbfb-89d5-4994-a900-60e1f6157fa5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.883936977Z" level=info msg="Starting container: 38143a50790b91e9bc5697a644df7b3bb4838f253d4ff1ad789da8d8fbfa498d" id=05507d19-8ed5-4182-9124-bdb836ed6b8d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:35:49 no-preload-170467 crio[842]: time="2025-11-01T10:35:49.88949166Z" level=info msg="Started container" PID=2550 containerID=38143a50790b91e9bc5697a644df7b3bb4838f253d4ff1ad789da8d8fbfa498d description=default/busybox/busybox id=05507d19-8ed5-4182-9124-bdb836ed6b8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	38143a50790b9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   f9e80301e4384       busybox                                     default
	88544478addbb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   401d92a90e78b       coredns-66bc5c9577-f8tc4                    kube-system
	1679dfd2dd54b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   b540d7b31decc       storage-provisioner                         kube-system
	461ede60b9cb8       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   ad39cfd2bf926       kindnet-5n4vx                               kube-system
	fe6aaec4170a6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      26 seconds ago      Running             kube-proxy                0                   dfe8aa84fac90       kube-proxy-8fvnf                            kube-system
	18d1ba928ce8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      42 seconds ago      Running             kube-scheduler            0                   2884d228f66f4       kube-scheduler-no-preload-170467            kube-system
	1876a943c64c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      42 seconds ago      Running             kube-apiserver            0                   52491734248c9       kube-apiserver-no-preload-170467            kube-system
	c90e7e50011ca       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      42 seconds ago      Running             kube-controller-manager   0                   ab8606119189f       kube-controller-manager-no-preload-170467   kube-system
	63ff52e67886b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      42 seconds ago      Running             etcd                      0                   5e6f8d76e03e5       etcd-no-preload-170467                      kube-system
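The container status table is CRI-level state as collected on the node; a hedged equivalent, run from the host against the same profile, would be:

    minikube -p no-preload-170467 ssh -- sudo crictl ps -a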
	
	
	==> coredns [88544478addbb3f43bec369fdeec6323ab1b93c82bdfbe129c540f49505c35f1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51927 - 20428 "HINFO IN 7933882857681840836.319908036520801131. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.033310648s
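The NXDOMAIN line above is CoreDNS's loop-detection self-check at startup (a random HINFO query), not a failed workload lookup. To verify the injected host.minikube.internal record from inside the cluster, one hedged option is to reuse the busybox image this test already pulls (the pod name dns-check is arbitrary):

    kubectl run dns-check --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      -- nslookup host.minikube.internal

Per the ConfigMap patch earlier in the log, the answer should be 192.168.76.1.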
	
	
	==> describe nodes <==
	Name:               no-preload-170467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-170467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=no-preload-170467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-170467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:35:55 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:35:55 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:35:55 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:35:55 +0000   Sat, 01 Nov 2025 10:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-170467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a96dd2dd-60b3-4301-a26e-0deb5b7ad5c7
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-f8tc4                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-170467                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-5n4vx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-170467             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-170467    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-8fvnf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-170467             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node no-preload-170467 event: Registered Node no-preload-170467 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-170467 status is now: NodeReady
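Everything in the "describe nodes" block is captured kubectl output; a matching command to reproduce it against this cluster after the run (assuming the kubectl context set up above) is:

    kubectl describe node no-preload-170467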
	
	
	==> dmesg <==
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [63ff52e67886ba6e122f496a3684bd831e9dfd020eb8156fa47acb4fda1f3b66] <==
	{"level":"warn","ts":"2025-11-01T10:35:18.797317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:18.829605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:18.871289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:18.944783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:18.968593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:18.991957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.010985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.025624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.045570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.077427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.113052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.185584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.250309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.293629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.323938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.369840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.395116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.424118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.464349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.484631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.507727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.553189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.590511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.636384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:19.798084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:57 up  2:18,  0 user,  load average: 5.80, 4.50, 3.25
	Linux no-preload-170467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [461ede60b9cb82b536310f094771b854523f1b18182294eb426b56ec96c10216] <==
	I1101 10:35:33.719573       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:35:33.719803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:35:33.719928       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:35:33.719947       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:35:33.719961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:35:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:35:33.924476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:35:34.018847       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:35:34.018951       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:35:34.019097       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:35:34.219019       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:35:34.219048       1 metrics.go:72] Registering metrics
	I1101 10:35:34.219097       1 controller.go:711] "Syncing nftables rules"
	I1101 10:35:43.931872       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:35:43.931913       1 main.go:301] handling current node
	I1101 10:35:53.926815       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:35:53.926856       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1876a943c64c639cc63d3ea51d2f8b461164f3634bde1491775756516ce2419a] <==
	E1101 10:35:21.221493       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 10:35:21.265341       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:35:21.352850       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:21.353421       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:35:21.399693       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:21.407492       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:35:21.458933       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:35:21.747408       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:35:21.757900       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:35:21.757927       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:35:23.219386       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:35:23.318324       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:35:23.482768       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:35:23.507617       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:35:23.509040       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:35:23.523217       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:35:23.994524       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:35:24.712437       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:35:24.758081       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:35:24.785921       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:35:29.033868       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:35:29.089406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:29.098553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:30.236705       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 10:35:55.465658       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:35724: use of closed network connection
	
	
	==> kube-controller-manager [c90e7e50011ca69e8b6e69030c81e53c1509f4e47aca4537afda1c9db26c028c] <==
	I1101 10:35:29.039255       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:35:29.041979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:29.052601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:35:29.068276       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:35:29.075161       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:35:29.075419       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:35:29.075466       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:35:29.081979       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:35:29.082267       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:35:29.083414       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:35:29.083694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:35:29.094000       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:35:29.094126       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:35:29.094205       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:35:29.094260       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:35:29.094294       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:35:29.094320       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:35:29.098935       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:35:29.101813       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:35:29.101917       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:35:29.101998       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-170467"
	I1101 10:35:29.102089       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:35:29.107904       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:35:29.116256       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-170467" podCIDRs=["10.244.0.0/24"]
	I1101 10:35:44.104131       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fe6aaec4170a6eb739f02884c4699b92a5c15b96019dd3d095d56a10c4b25248] <==
	I1101 10:35:30.921933       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:35:31.016364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:35:31.119075       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:35:31.119371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:35:31.119488       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:35:31.176922       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:35:31.176989       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:35:31.208569       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:35:31.208884       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:35:31.208905       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:35:31.218928       1 config.go:200] "Starting service config controller"
	I1101 10:35:31.218949       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:35:31.218982       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:35:31.218987       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:35:31.219009       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:35:31.219014       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:35:31.219037       1 config.go:309] "Starting node config controller"
	I1101 10:35:31.219042       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:35:31.321321       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:35:31.321349       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:35:31.321396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:35:31.324218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [18d1ba928ce8c3873a994034776d24e0b9a6a7590fc23be61f0d7622fa5fdd5a] <==
	I1101 10:35:17.449155       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:35:23.496248       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:35:23.496287       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:35:23.534378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:35:23.534443       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:35:23.534556       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:35:23.534665       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:35:23.534701       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:35:23.534745       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:35:23.558332       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:35:23.561080       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:35:23.634785       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:35:23.634830       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:35:23.662185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:35:29 no-preload-170467 kubelet[1999]: I1101 10:35:29.156280    1999 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450300    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cea402c3-b0d0-4f78-a280-78a5f2b96cd8-xtables-lock\") pod \"kindnet-5n4vx\" (UID: \"cea402c3-b0d0-4f78-a280-78a5f2b96cd8\") " pod="kube-system/kindnet-5n4vx"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450361    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/175e8939-23b3-40f6-8a2a-50617c44de73-lib-modules\") pod \"kube-proxy-8fvnf\" (UID: \"175e8939-23b3-40f6-8a2a-50617c44de73\") " pod="kube-system/kube-proxy-8fvnf"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450385    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cea402c3-b0d0-4f78-a280-78a5f2b96cd8-cni-cfg\") pod \"kindnet-5n4vx\" (UID: \"cea402c3-b0d0-4f78-a280-78a5f2b96cd8\") " pod="kube-system/kindnet-5n4vx"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450405    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wps\" (UniqueName: \"kubernetes.io/projected/cea402c3-b0d0-4f78-a280-78a5f2b96cd8-kube-api-access-94wps\") pod \"kindnet-5n4vx\" (UID: \"cea402c3-b0d0-4f78-a280-78a5f2b96cd8\") " pod="kube-system/kindnet-5n4vx"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450438    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cea402c3-b0d0-4f78-a280-78a5f2b96cd8-lib-modules\") pod \"kindnet-5n4vx\" (UID: \"cea402c3-b0d0-4f78-a280-78a5f2b96cd8\") " pod="kube-system/kindnet-5n4vx"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450457    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/175e8939-23b3-40f6-8a2a-50617c44de73-kube-proxy\") pod \"kube-proxy-8fvnf\" (UID: \"175e8939-23b3-40f6-8a2a-50617c44de73\") " pod="kube-system/kube-proxy-8fvnf"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450473    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/175e8939-23b3-40f6-8a2a-50617c44de73-xtables-lock\") pod \"kube-proxy-8fvnf\" (UID: \"175e8939-23b3-40f6-8a2a-50617c44de73\") " pod="kube-system/kube-proxy-8fvnf"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.450492    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vk5\" (UniqueName: \"kubernetes.io/projected/175e8939-23b3-40f6-8a2a-50617c44de73-kube-api-access-42vk5\") pod \"kube-proxy-8fvnf\" (UID: \"175e8939-23b3-40f6-8a2a-50617c44de73\") " pod="kube-system/kube-proxy-8fvnf"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: I1101 10:35:30.632545    1999 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: W1101 10:35:30.700492    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-ad39cfd2bf9266ddcba8a32facd079e98bc1d3e07003832ce043723907d15a0a WatchSource:0}: Error finding container ad39cfd2bf9266ddcba8a32facd079e98bc1d3e07003832ce043723907d15a0a: Status 404 returned error can't find the container with id ad39cfd2bf9266ddcba8a32facd079e98bc1d3e07003832ce043723907d15a0a
	Nov 01 10:35:30 no-preload-170467 kubelet[1999]: W1101 10:35:30.732670    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-dfe8aa84fac90b6f06d4df035fb08e5ff1f31ef2be1b358f12120cdadffa993e WatchSource:0}: Error finding container dfe8aa84fac90b6f06d4df035fb08e5ff1f31ef2be1b358f12120cdadffa993e: Status 404 returned error can't find the container with id dfe8aa84fac90b6f06d4df035fb08e5ff1f31ef2be1b358f12120cdadffa993e
	Nov 01 10:35:34 no-preload-170467 kubelet[1999]: I1101 10:35:34.139410    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8fvnf" podStartSLOduration=4.139389128 podStartE2EDuration="4.139389128s" podCreationTimestamp="2025-11-01 10:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:31.109609911 +0000 UTC m=+6.457957813" watchObservedRunningTime="2025-11-01 10:35:34.139389128 +0000 UTC m=+9.487737030"
	Nov 01 10:35:34 no-preload-170467 kubelet[1999]: I1101 10:35:34.139736    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5n4vx" podStartSLOduration=1.29820877 podStartE2EDuration="4.13972868s" podCreationTimestamp="2025-11-01 10:35:30 +0000 UTC" firstStartedPulling="2025-11-01 10:35:30.703594379 +0000 UTC m=+6.051942273" lastFinishedPulling="2025-11-01 10:35:33.545114281 +0000 UTC m=+8.893462183" observedRunningTime="2025-11-01 10:35:34.13884469 +0000 UTC m=+9.487192592" watchObservedRunningTime="2025-11-01 10:35:34.13972868 +0000 UTC m=+9.488076566"
	Nov 01 10:35:43 no-preload-170467 kubelet[1999]: I1101 10:35:43.996783    1999 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:35:44 no-preload-170467 kubelet[1999]: I1101 10:35:44.159297    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792zm\" (UniqueName: \"kubernetes.io/projected/a1ca4576-5984-4992-8da9-de18b36fda4e-kube-api-access-792zm\") pod \"coredns-66bc5c9577-f8tc4\" (UID: \"a1ca4576-5984-4992-8da9-de18b36fda4e\") " pod="kube-system/coredns-66bc5c9577-f8tc4"
	Nov 01 10:35:44 no-preload-170467 kubelet[1999]: I1101 10:35:44.159359    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1ca4576-5984-4992-8da9-de18b36fda4e-config-volume\") pod \"coredns-66bc5c9577-f8tc4\" (UID: \"a1ca4576-5984-4992-8da9-de18b36fda4e\") " pod="kube-system/coredns-66bc5c9577-f8tc4"
	Nov 01 10:35:44 no-preload-170467 kubelet[1999]: I1101 10:35:44.159387    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a44a6d4-6c61-4a30-a833-7cd0b05ded40-tmp\") pod \"storage-provisioner\" (UID: \"3a44a6d4-6c61-4a30-a833-7cd0b05ded40\") " pod="kube-system/storage-provisioner"
	Nov 01 10:35:44 no-preload-170467 kubelet[1999]: I1101 10:35:44.159406    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrhh\" (UniqueName: \"kubernetes.io/projected/3a44a6d4-6c61-4a30-a833-7cd0b05ded40-kube-api-access-slrhh\") pod \"storage-provisioner\" (UID: \"3a44a6d4-6c61-4a30-a833-7cd0b05ded40\") " pod="kube-system/storage-provisioner"
	Nov 01 10:35:44 no-preload-170467 kubelet[1999]: W1101 10:35:44.360133    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-b540d7b31decc7c9ba7cc88a3cdddebda17befc8cd0cb1c080bf723bf231d7a5 WatchSource:0}: Error finding container b540d7b31decc7c9ba7cc88a3cdddebda17befc8cd0cb1c080bf723bf231d7a5: Status 404 returned error can't find the container with id b540d7b31decc7c9ba7cc88a3cdddebda17befc8cd0cb1c080bf723bf231d7a5
	Nov 01 10:35:44 no-preload-170467 kubelet[1999]: W1101 10:35:44.397802    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-401d92a90e78bdfee0142eb0aa6a3e8073901e8ae9f0a9038466febecd4358ac WatchSource:0}: Error finding container 401d92a90e78bdfee0142eb0aa6a3e8073901e8ae9f0a9038466febecd4358ac: Status 404 returned error can't find the container with id 401d92a90e78bdfee0142eb0aa6a3e8073901e8ae9f0a9038466febecd4358ac
	Nov 01 10:35:45 no-preload-170467 kubelet[1999]: I1101 10:35:45.206329    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.206307789 podStartE2EDuration="14.206307789s" podCreationTimestamp="2025-11-01 10:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:45.17731308 +0000 UTC m=+20.525660982" watchObservedRunningTime="2025-11-01 10:35:45.206307789 +0000 UTC m=+20.554655683"
	Nov 01 10:35:45 no-preload-170467 kubelet[1999]: I1101 10:35:45.239070    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f8tc4" podStartSLOduration=16.239050745 podStartE2EDuration="16.239050745s" podCreationTimestamp="2025-11-01 10:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:45.209036002 +0000 UTC m=+20.557383920" watchObservedRunningTime="2025-11-01 10:35:45.239050745 +0000 UTC m=+20.587398638"
	Nov 01 10:35:47 no-preload-170467 kubelet[1999]: I1101 10:35:47.488045    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ck2\" (UniqueName: \"kubernetes.io/projected/63a3bfba-fa06-422e-9226-ff614dc0a6b5-kube-api-access-h2ck2\") pod \"busybox\" (UID: \"63a3bfba-fa06-422e-9226-ff614dc0a6b5\") " pod="default/busybox"
	Nov 01 10:35:47 no-preload-170467 kubelet[1999]: W1101 10:35:47.659772    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502 WatchSource:0}: Error finding container f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502: Status 404 returned error can't find the container with id f9e80301e4384fce7fc7d997221c8921f16b28880861beccacbc265c073f4502
	
	
	==> storage-provisioner [1679dfd2dd54b719964db288a50692652cee2ef7d7054a00b2403a4de26e45a9] <==
	I1101 10:35:44.479175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:35:44.496401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:35:44.496515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:35:44.509080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:44.518986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:35:44.519374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:35:44.519602       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-170467_1a1aa4bf-2c89-412d-8f3e-f61f40e6dbc5!
	I1101 10:35:44.520565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43dd1037-8540-457f-804d-2dae616429c5", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-170467_1a1aa4bf-2c89-412d-8f3e-f61f40e6dbc5 became leader
	W1101 10:35:44.528365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:44.558858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:35:44.620471       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-170467_1a1aa4bf-2c89-412d-8f3e-f61f40e6dbc5!
	W1101 10:35:46.568731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:46.574045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:48.577026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:48.581880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:50.584538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:50.589102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:52.592709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:52.599348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:54.602849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:54.607361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:56.610592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:35:56.616549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-170467 -n no-preload-170467
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-170467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.65s)
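Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings appear to come from the provisioner's leader-election loop, which in this build still renews its lease through an Endpoints-based lock (kube-system/k8s.io-minikube-hostpath, as named in the log). A minimal way to inspect that lock by hand, assuming the object and context names shown above, is:

	kubectl --context no-preload-170467 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the current holder is stored in a leader-election annotation on this object (an assumption based on the
	# standard client-go Endpoints lock); the deprecation warnings recur on each renew of this lock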

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (335.948919ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
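The MK_ADDON_ENABLE_PAUSED exit above means the addon command's "check paused" step failed: the runtime query it runs on the node, sudo runc list -f json, exited with status 1 and "open /run/runc: no such file or directory". A minimal way to re-run that same probe by hand, assuming the profile name used in this test, is:

	out/minikube-linux-arm64 -p embed-certs-618070 ssh -- sudo runc list -f json
	# reproduces the failing check; it errors the same way while /run/runc does not yet exist on the node,
	# and lists (possibly zero) runc containers once the runtime has created its state directory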
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-618070 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-618070 describe deploy/metrics-server -n kube-system: exit status 1 (116.561896ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-618070 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
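The assertion at start_stop_delete_test.go:219 verifies that the metrics-server Deployment was rendered with the custom registry and image passed via --registries/--images; here it fails because the enable step itself failed, so the Deployment is NotFound (see the describe output above). A minimal manual check for the override, assuming the same kubectl context and the standard kube-system deployment name, is:

	kubectl --context embed-certs-618070 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to print fake.domain/registry.k8s.io/echoserver:1.4 once the addon has been applied with the override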
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-618070
helpers_test.go:243: (dbg) docker inspect embed-certs-618070:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79",
	        "Created": "2025-11-01T10:34:43.970958066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 466580,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:34:44.055921467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/hosts",
	        "LogPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79-json.log",
	        "Name": "/embed-certs-618070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-618070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-618070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79",
	                "LowerDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-618070",
	                "Source": "/var/lib/docker/volumes/embed-certs-618070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-618070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-618070",
	                "name.minikube.sigs.k8s.io": "embed-certs-618070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12d85483505f917f58f9da74bd3a06d0f6fbea10b9f0da31972d16b362be1549",
	            "SandboxKey": "/var/run/docker/netns/12d85483505f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-618070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:72:46:d1:d7:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a9320fc77e2ab7eae746fc7f855e8764c40a6520ae3423667b1ef82153e035d",
	                    "EndpointID": "e471b46f5f3596d4b6191b4a64c2d8c8eeb974d0d42880b429936730c7759c49",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-618070",
	                        "5b2cdd451242"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-618070 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-618070 logs -n 25: (1.484772794s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ delete  │ -p cilium-220636                                                                                                                                                                                                                              │ cilium-220636            │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:30 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:30 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p force-systemd-env-065424                                                                                                                                                                                                                   │ force-systemd-env-065424 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ cert-options-082900 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900      │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	│ stop    │ -p old-k8s-version-180313 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318   │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467        │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070       │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:36:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:36:10.421570  471219 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:36:10.421760  471219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:10.421773  471219 out.go:374] Setting ErrFile to fd 2...
	I1101 10:36:10.421780  471219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:10.422162  471219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:36:10.422713  471219 out.go:368] Setting JSON to false
	I1101 10:36:10.423922  471219 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8320,"bootTime":1761985051,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:36:10.424005  471219 start.go:143] virtualization:  
	I1101 10:36:10.429051  471219 out.go:179] * [no-preload-170467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:36:10.432231  471219 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:36:10.432280  471219 notify.go:221] Checking for updates...
	I1101 10:36:10.438846  471219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:36:10.441845  471219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:10.444698  471219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:36:10.447742  471219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:36:10.450616  471219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:36:10.454278  471219 config.go:182] Loaded profile config "no-preload-170467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:10.454861  471219 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:36:10.483726  471219 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:36:10.483847  471219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:10.544399  471219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:36:10.534628477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:36:10.544514  471219 docker.go:319] overlay module found
	I1101 10:36:10.549335  471219 out.go:179] * Using the docker driver based on existing profile
	I1101 10:36:10.552147  471219 start.go:309] selected driver: docker
	I1101 10:36:10.552168  471219 start.go:930] validating driver "docker" against &{Name:no-preload-170467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-170467 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:10.552268  471219 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:36:10.553018  471219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:10.627179  471219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:36:10.6176745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:36:10.627576  471219 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:36:10.627614  471219 cni.go:84] Creating CNI manager for ""
	I1101 10:36:10.627672  471219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:10.627715  471219 start.go:353] cluster config:
	{Name:no-preload-170467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-170467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:10.632792  471219 out.go:179] * Starting "no-preload-170467" primary control-plane node in "no-preload-170467" cluster
	I1101 10:36:10.635547  471219 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:36:10.638454  471219 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:36:10.641306  471219 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:10.641394  471219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:36:10.641459  471219 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/config.json ...
	I1101 10:36:10.641750  471219 cache.go:107] acquiring lock: {Name:mk69ca0dab849b63b76844e5b8ce70975854cf36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.641847  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:36:10.641862  471219 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.831µs
	I1101 10:36:10.641876  471219 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:36:10.641889  471219 cache.go:107] acquiring lock: {Name:mkdf5f606389005f0157c2de172bab3966512605 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.641933  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:36:10.641943  471219 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 55.764µs
	I1101 10:36:10.641950  471219 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:36:10.641960  471219 cache.go:107] acquiring lock: {Name:mk526d588064a9632611efd8c5f29f4756ada6b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.641992  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:36:10.641997  471219 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 38.294µs
	I1101 10:36:10.642008  471219 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:36:10.642017  471219 cache.go:107] acquiring lock: {Name:mke8866c580584e7755ebc19f7a7adbc0afec82c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.642048  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:36:10.642058  471219 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 41.904µs
	I1101 10:36:10.642064  471219 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:36:10.642073  471219 cache.go:107] acquiring lock: {Name:mk89104a31eabba5cee4c660866714d7b945ad9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.642104  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:36:10.642114  471219 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 41.453µs
	I1101 10:36:10.642120  471219 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:36:10.642129  471219 cache.go:107] acquiring lock: {Name:mkec8ad847f746f1815fa1935b018ce759592830 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.642158  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:36:10.642164  471219 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.816µs
	I1101 10:36:10.642175  471219 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:36:10.642184  471219 cache.go:107] acquiring lock: {Name:mk10b32190b058543f03329b18de0a4e248caf3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.642213  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:36:10.642222  471219 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.541µs
	I1101 10:36:10.642231  471219 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:36:10.642248  471219 cache.go:107] acquiring lock: {Name:mk06d24a47dd2ca13ad4aa75d26fe71094cee031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.642287  471219 cache.go:115] /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:36:10.642296  471219 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 48.977µs
	I1101 10:36:10.642302  471219 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:36:10.642308  471219 cache.go:87] Successfully saved all images to host disk.
	I1101 10:36:10.661250  471219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:36:10.661275  471219 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:36:10.661331  471219 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:36:10.661362  471219 start.go:360] acquireMachinesLock for no-preload-170467: {Name:mk642e2a9ea5f4d82003c65686222cc72e6996eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:10.661431  471219 start.go:364] duration metric: took 47.32µs to acquireMachinesLock for "no-preload-170467"
	I1101 10:36:10.661455  471219 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:36:10.661464  471219 fix.go:54] fixHost starting: 
	I1101 10:36:10.661768  471219 cli_runner.go:164] Run: docker container inspect no-preload-170467 --format={{.State.Status}}
	I1101 10:36:10.678510  471219 fix.go:112] recreateIfNeeded on no-preload-170467: state=Stopped err=<nil>
	W1101 10:36:10.678541  471219 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:36:10.681863  471219 out.go:252] * Restarting existing docker container for "no-preload-170467" ...
	I1101 10:36:10.681972  471219 cli_runner.go:164] Run: docker start no-preload-170467
	I1101 10:36:10.958312  471219 cli_runner.go:164] Run: docker container inspect no-preload-170467 --format={{.State.Status}}
	I1101 10:36:10.984707  471219 kic.go:430] container "no-preload-170467" state is running.
	I1101 10:36:10.985100  471219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-170467
	I1101 10:36:11.007810  471219 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/config.json ...
	I1101 10:36:11.008059  471219 machine.go:94] provisionDockerMachine start ...
	I1101 10:36:11.008134  471219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:36:11.027055  471219 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:11.027602  471219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1101 10:36:11.027623  471219 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:36:11.028241  471219 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:36:14.185327  471219 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-170467
	
	I1101 10:36:14.185352  471219 ubuntu.go:182] provisioning hostname "no-preload-170467"
	I1101 10:36:14.185412  471219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:36:14.203323  471219 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:14.203639  471219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1101 10:36:14.203657  471219 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-170467 && echo "no-preload-170467" | sudo tee /etc/hostname
	I1101 10:36:14.367562  471219 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-170467
	
	I1101 10:36:14.367680  471219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:36:14.386336  471219 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:14.386651  471219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1101 10:36:14.386673  471219 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-170467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-170467/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-170467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:36:14.538135  471219 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:36:14.538160  471219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:36:14.538179  471219 ubuntu.go:190] setting up certificates
	I1101 10:36:14.538198  471219 provision.go:84] configureAuth start
	I1101 10:36:14.538258  471219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-170467
	I1101 10:36:14.555920  471219 provision.go:143] copyHostCerts
	I1101 10:36:14.555994  471219 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:36:14.556004  471219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:36:14.556159  471219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:36:14.556272  471219 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:36:14.556278  471219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:36:14.556307  471219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:36:14.556371  471219 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:36:14.556375  471219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:36:14.556415  471219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:36:14.556477  471219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.no-preload-170467 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-170467]
	I1101 10:36:15.227675  471219 provision.go:177] copyRemoteCerts
	I1101 10:36:15.227752  471219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:36:15.227796  471219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:36:15.251178  471219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:36:15.358731  471219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:36:15.385437  471219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:36:15.412020  471219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
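	
	The provisioning lines above repeatedly shell out to `docker container inspect -f` with a Go template to discover the host port that Docker mapped to the container's SSH port (22/tcp, shown as 33430 here). A minimal standalone Go sketch of that lookup is below; it is illustrative only (not minikube's cli_runner code), the container name is taken from the log, and the template string is copied verbatim from the inspect commands above.
	
	// Hypothetical helper: finds the host port Docker mapped to 22/tcp,
	// using the same `docker container inspect -f` template as the log above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func sshHostPort(container string) (string, error) {
		// Same Go template as in the cli_runner lines above.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		port, err := sshHostPort("no-preload-170467")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh -p", port, "docker@127.0.0.1") // this run reports port 33430
	}
	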
	
	
	==> CRI-O <==
	Nov 01 10:36:04 embed-certs-618070 crio[839]: time="2025-11-01T10:36:04.58547726Z" level=info msg="Created container 7a261a846973c3cfcf0a9b58c2185513c7f1ecb80bfff5f006db88aa996dacd1: kube-system/coredns-66bc5c9577-6rf8b/coredns" id=e18429c9-2d71-4809-a55a-0a4f9c040c72 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:36:04 embed-certs-618070 crio[839]: time="2025-11-01T10:36:04.58885634Z" level=info msg="Starting container: 7a261a846973c3cfcf0a9b58c2185513c7f1ecb80bfff5f006db88aa996dacd1" id=3521a75e-593b-4a57-b347-3b17d55f8135 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:36:04 embed-certs-618070 crio[839]: time="2025-11-01T10:36:04.59707149Z" level=info msg="Started container" PID=1717 containerID=7a261a846973c3cfcf0a9b58c2185513c7f1ecb80bfff5f006db88aa996dacd1 description=kube-system/coredns-66bc5c9577-6rf8b/coredns id=3521a75e-593b-4a57-b347-3b17d55f8135 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d7641d99098456ebc3a06e3ed7a6e120bd51181b7e3a4d8d6f23ead9039ee33
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.509030483Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cc482c52-ff6b-41d2-a79d-469d2c227337 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.509100548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.522267922Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7a96fe1cdf2e2019f1bd3c2167293a77882fcbc1a1fce73b7d1e2ab188c80c4a UID:f0e3261c-8c25-4d4b-a969-0f9698b1e429 NetNS:/var/run/netns/eac7f3b6-20df-4ee5-b321-c7fc5a3a950f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012bac0}] Aliases:map[]}"
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.522509773Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.532563829Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7a96fe1cdf2e2019f1bd3c2167293a77882fcbc1a1fce73b7d1e2ab188c80c4a UID:f0e3261c-8c25-4d4b-a969-0f9698b1e429 NetNS:/var/run/netns/eac7f3b6-20df-4ee5-b321-c7fc5a3a950f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012bac0}] Aliases:map[]}"
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.532761059Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.536642616Z" level=info msg="Ran pod sandbox 7a96fe1cdf2e2019f1bd3c2167293a77882fcbc1a1fce73b7d1e2ab188c80c4a with infra container: default/busybox/POD" id=cc482c52-ff6b-41d2-a79d-469d2c227337 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.5402121Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=238bddde-8266-48fa-8142-680c33fc8d1f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.540367664Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=238bddde-8266-48fa-8142-680c33fc8d1f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.54041692Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=238bddde-8266-48fa-8142-680c33fc8d1f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.541646738Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9f7ca93b-3948-4602-bba5-45a56c3841de name=/runtime.v1.ImageService/PullImage
	Nov 01 10:36:07 embed-certs-618070 crio[839]: time="2025-11-01T10:36:07.543338334Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.55161862Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9f7ca93b-3948-4602-bba5-45a56c3841de name=/runtime.v1.ImageService/PullImage
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.552310818Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd3de15c-857e-4c53-89bc-77d294a422ed name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.554215382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d75c44a0-e0b7-4498-8055-793a75e71fc4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.560233367Z" level=info msg="Creating container: default/busybox/busybox" id=e7cbd47e-bc29-49a2-80a5-7f5ca37c18f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.560362675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.566108063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.566574272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.581931524Z" level=info msg="Created container cbab598c9d44e0933ddc2dd885b69c44025fa0e0cafc3d4bd7270bcb3a4895c0: default/busybox/busybox" id=e7cbd47e-bc29-49a2-80a5-7f5ca37c18f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.582920993Z" level=info msg="Starting container: cbab598c9d44e0933ddc2dd885b69c44025fa0e0cafc3d4bd7270bcb3a4895c0" id=ed71dd75-8bf1-44fd-8899-034b1f9e4648 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:36:09 embed-certs-618070 crio[839]: time="2025-11-01T10:36:09.585419341Z" level=info msg="Started container" PID=1775 containerID=cbab598c9d44e0933ddc2dd885b69c44025fa0e0cafc3d4bd7270bcb3a4895c0 description=default/busybox/busybox id=ed71dd75-8bf1-44fd-8899-034b1f9e4648 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a96fe1cdf2e2019f1bd3c2167293a77882fcbc1a1fce73b7d1e2ab188c80c4a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	cbab598c9d44e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   7a96fe1cdf2e2       busybox                                      default
	7a261a846973c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   3d7641d990984       coredns-66bc5c9577-6rf8b                     kube-system
	c66cb8900cc0e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   10a1919019323       storage-provisioner                          kube-system
	9db60cd88d576       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   8b48ab31520e0       kube-proxy-8lcjb                             kube-system
	e511f7975fba4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   e0bbc6b255f79       kindnet-df7sw                                kube-system
	eba9b726b14d9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   2b89740ae534a       kube-controller-manager-embed-certs-618070   kube-system
	c37742ef96784       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   d4fd8dcf0f674       kube-scheduler-embed-certs-618070            kube-system
	b0009293ac616       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6b533371d778f       kube-apiserver-embed-certs-618070            kube-system
	9c4b20042bcd3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b7d5a126c7dc8       etcd-embed-certs-618070                      kube-system
	
	
	==> coredns [7a261a846973c3cfcf0a9b58c2185513c7f1ecb80bfff5f006db88aa996dacd1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57860 - 1526 "HINFO IN 5991724751075732133.2743691687047218048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020569873s
	
	
	==> describe nodes <==
	Name:               embed-certs-618070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-618070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=embed-certs-618070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-618070
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:36:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:36:04 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:36:04 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:36:04 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:36:04 +0000   Sat, 01 Nov 2025 10:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-618070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                5139744f-7550-4fc5-8cfe-6439f928869a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6rf8b                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-embed-certs-618070                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-df7sw                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-618070             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-618070    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-8lcjb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-618070             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-618070 event: Registered Node embed-certs-618070 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-618070 status is now: NodeReady
	
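	The "Allocated resources" figures above follow directly from the per-pod CPU requests listed in the Non-terminated Pods table against the node's 2-CPU capacity. A small arithmetic check (illustrative only; pod names and millicore values copied from the table above):
	
	package main
	
	import "fmt"
	
	func main() {
		// CPU requests (millicores) from the Non-terminated Pods table.
		requests := map[string]int{
			"coredns-66bc5c9577-6rf8b":                  100,
			"etcd-embed-certs-618070":                    100,
			"kindnet-df7sw":                              100,
			"kube-apiserver-embed-certs-618070":          250,
			"kube-controller-manager-embed-certs-618070": 200,
			"kube-scheduler-embed-certs-618070":          100,
		}
		total := 0
		for _, m := range requests {
			total += m
		}
		capacityMilli := 2 * 1000 // node capacity: cpu 2
		fmt.Printf("requested %dm of %dm = %d%%\n", total, capacityMilli, total*100/capacityMilli)
		// Output: requested 850m of 2000m = 42%, matching the table above.
	}
	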
	
	==> dmesg <==
	[Nov 1 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9c4b20042bcd3a9062c1cbe0b2fa0f86aa7273606ed8830ea326b31a7742ffc6] <==
	{"level":"warn","ts":"2025-11-01T10:35:11.461284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.490241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.530134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.564178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.574978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.610075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.640336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.666837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.735179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.739009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.763556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.793083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.820234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.846928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.866794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.914925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.950277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:11.965900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.000927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.035561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.069320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.098989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.132118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.169667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:35:12.205122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35990","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:17 up  2:18,  0 user,  load average: 4.41, 4.27, 3.20
	Linux embed-certs-618070 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e511f7975fba44097ca12713e06bc4d02d531674fc9d9777529766aea0e67907] <==
	I1101 10:35:23.719613       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:35:23.719842       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:35:23.719975       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:35:23.719986       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:35:23.719995       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:35:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:35:23.932178       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:35:23.932199       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:35:23.932208       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:35:23.932554       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:35:53.932217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:35:53.932217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:35:53.932497       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:35:53.933517       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:35:55.532745       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:35:55.532780       1 metrics.go:72] Registering metrics
	I1101 10:35:55.532850       1 controller.go:711] "Syncing nftables rules"
	I1101 10:36:03.937828       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:36:03.937907       1 main.go:301] handling current node
	I1101 10:36:13.931550       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:36:13.931588       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b0009293ac6167a1b1b07877900c930b2f6ab3a735dd4ac36b1254918985b988] <==
	E1101 10:35:13.982383       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 10:35:14.009922       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:35:14.024611       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:14.028302       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:35:14.050189       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:35:14.053070       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:14.186291       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:35:14.465178       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:35:14.474075       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:35:14.474108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:35:15.663864       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:35:15.735878       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:35:15.814381       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:35:15.826398       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:35:15.827608       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:35:15.832706       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:35:16.737107       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:35:17.126534       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:35:17.173093       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:35:17.240971       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:35:22.343822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:22.367414       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:22.534384       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:35:22.686780       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1101 10:36:15.078699       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:41138: use of closed network connection
	
	
	==> kube-controller-manager [eba9b726b14d9e7479792292a199c4fd5813613565077e2402823d1932204547] <==
	I1101 10:35:21.887290       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:35:21.887418       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:35:21.887427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:35:21.887433       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:35:21.887441       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:35:21.887448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:35:21.899476       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:35:21.899711       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:35:21.891334       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:35:21.891980       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-618070" podCIDRs=["10.244.0.0/24"]
	I1101 10:35:21.886059       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:35:21.886087       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:35:21.886095       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:35:21.918559       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:35:21.938806       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:35:21.938854       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:35:21.945529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:35:21.945617       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:35:21.951605       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:35:21.951636       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:35:21.953311       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:22.026396       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:22.026502       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:35:22.026533       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:36:06.894442       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9db60cd88d5767960940b25ae4a74f7822ed8d4ab93f1f2a0aaaaa50da40c329] <==
	I1101 10:35:23.758444       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:35:23.871970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:35:23.984095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:35:23.984148       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:35:23.984226       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:35:24.137998       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:35:24.143425       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:35:24.155903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:35:24.158546       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:35:24.158573       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:35:24.165336       1 config.go:200] "Starting service config controller"
	I1101 10:35:24.165357       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:35:24.165380       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:35:24.165384       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:35:24.165397       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:35:24.165400       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:35:24.166386       1 config.go:309] "Starting node config controller"
	I1101 10:35:24.166396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:35:24.166403       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:35:24.265459       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:35:24.265496       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:35:24.265546       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c37742ef9678472e88ecb7b78bf47a0ad273ad4280dd3f7f77965c10d4c292e1] <==
	E1101 10:35:13.975864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:35:13.975902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:35:13.985584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:35:13.985660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:35:13.987114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:35:13.987195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:35:13.987259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:35:13.987320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:35:13.987378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:35:13.987440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:35:13.987570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:35:13.987638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:35:14.883452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:35:14.889620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:35:14.994012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:35:15.018097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:35:15.040581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:35:15.093519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:35:15.194473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:35:15.207266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:35:15.218151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:35:15.246335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:35:15.289964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:35:15.314086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 10:35:18.149833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:35:21 embed-certs-618070 kubelet[1289]: I1101 10:35:21.870748    1289 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:35:21 embed-certs-618070 kubelet[1289]: I1101 10:35:21.873915    1289 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718283    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/268ab883-4df6-47bd-8d25-523991f7a2d0-lib-modules\") pod \"kindnet-df7sw\" (UID: \"268ab883-4df6-47bd-8d25-523991f7a2d0\") " pod="kube-system/kindnet-df7sw"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718477    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc87q\" (UniqueName: \"kubernetes.io/projected/d2f8c1d5-fad6-4e84-af61-5152f65cf2bb-kube-api-access-lc87q\") pod \"kube-proxy-8lcjb\" (UID: \"d2f8c1d5-fad6-4e84-af61-5152f65cf2bb\") " pod="kube-system/kube-proxy-8lcjb"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718579    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/268ab883-4df6-47bd-8d25-523991f7a2d0-xtables-lock\") pod \"kindnet-df7sw\" (UID: \"268ab883-4df6-47bd-8d25-523991f7a2d0\") " pod="kube-system/kindnet-df7sw"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718669    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58gr8\" (UniqueName: \"kubernetes.io/projected/268ab883-4df6-47bd-8d25-523991f7a2d0-kube-api-access-58gr8\") pod \"kindnet-df7sw\" (UID: \"268ab883-4df6-47bd-8d25-523991f7a2d0\") " pod="kube-system/kindnet-df7sw"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718754    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2f8c1d5-fad6-4e84-af61-5152f65cf2bb-kube-proxy\") pod \"kube-proxy-8lcjb\" (UID: \"d2f8c1d5-fad6-4e84-af61-5152f65cf2bb\") " pod="kube-system/kube-proxy-8lcjb"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718844    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f8c1d5-fad6-4e84-af61-5152f65cf2bb-xtables-lock\") pod \"kube-proxy-8lcjb\" (UID: \"d2f8c1d5-fad6-4e84-af61-5152f65cf2bb\") " pod="kube-system/kube-proxy-8lcjb"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718921    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f8c1d5-fad6-4e84-af61-5152f65cf2bb-lib-modules\") pod \"kube-proxy-8lcjb\" (UID: \"d2f8c1d5-fad6-4e84-af61-5152f65cf2bb\") " pod="kube-system/kube-proxy-8lcjb"
	Nov 01 10:35:22 embed-certs-618070 kubelet[1289]: I1101 10:35:22.718993    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/268ab883-4df6-47bd-8d25-523991f7a2d0-cni-cfg\") pod \"kindnet-df7sw\" (UID: \"268ab883-4df6-47bd-8d25-523991f7a2d0\") " pod="kube-system/kindnet-df7sw"
	Nov 01 10:35:23 embed-certs-618070 kubelet[1289]: I1101 10:35:23.021872    1289 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:35:23 embed-certs-618070 kubelet[1289]: W1101 10:35:23.310439    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/crio-e0bbc6b255f79d137fa4b432c5e0b168e979c98e33e5b7753cdc4b50c555cb66 WatchSource:0}: Error finding container e0bbc6b255f79d137fa4b432c5e0b168e979c98e33e5b7753cdc4b50c555cb66: Status 404 returned error can't find the container with id e0bbc6b255f79d137fa4b432c5e0b168e979c98e33e5b7753cdc4b50c555cb66
	Nov 01 10:35:23 embed-certs-618070 kubelet[1289]: W1101 10:35:23.385221    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/crio-8b48ab31520e076091289e6bc8067114125627665aa4ef0ef3d20839431e8ea7 WatchSource:0}: Error finding container 8b48ab31520e076091289e6bc8067114125627665aa4ef0ef3d20839431e8ea7: Status 404 returned error can't find the container with id 8b48ab31520e076091289e6bc8067114125627665aa4ef0ef3d20839431e8ea7
	Nov 01 10:35:24 embed-certs-618070 kubelet[1289]: I1101 10:35:24.099282    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-df7sw" podStartSLOduration=2.099261782 podStartE2EDuration="2.099261782s" podCreationTimestamp="2025-11-01 10:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:23.690357251 +0000 UTC m=+6.701794194" watchObservedRunningTime="2025-11-01 10:35:24.099261782 +0000 UTC m=+7.110698716"
	Nov 01 10:35:25 embed-certs-618070 kubelet[1289]: I1101 10:35:25.909801    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8lcjb" podStartSLOduration=3.909780387 podStartE2EDuration="3.909780387s" podCreationTimestamp="2025-11-01 10:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:24.723472481 +0000 UTC m=+7.734909407" watchObservedRunningTime="2025-11-01 10:35:25.909780387 +0000 UTC m=+8.921217321"
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: I1101 10:36:04.160658    1289 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: I1101 10:36:04.359945    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3db12ed9-30d4-45e6-9c67-8fa581fe4652-config-volume\") pod \"coredns-66bc5c9577-6rf8b\" (UID: \"3db12ed9-30d4-45e6-9c67-8fa581fe4652\") " pod="kube-system/coredns-66bc5c9577-6rf8b"
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: I1101 10:36:04.360003    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/31534ded-cd9f-410b-abe9-f1992dd225bc-tmp\") pod \"storage-provisioner\" (UID: \"31534ded-cd9f-410b-abe9-f1992dd225bc\") " pod="kube-system/storage-provisioner"
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: I1101 10:36:04.360037    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgtqh\" (UniqueName: \"kubernetes.io/projected/31534ded-cd9f-410b-abe9-f1992dd225bc-kube-api-access-mgtqh\") pod \"storage-provisioner\" (UID: \"31534ded-cd9f-410b-abe9-f1992dd225bc\") " pod="kube-system/storage-provisioner"
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: I1101 10:36:04.360074    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kknj\" (UniqueName: \"kubernetes.io/projected/3db12ed9-30d4-45e6-9c67-8fa581fe4652-kube-api-access-4kknj\") pod \"coredns-66bc5c9577-6rf8b\" (UID: \"3db12ed9-30d4-45e6-9c67-8fa581fe4652\") " pod="kube-system/coredns-66bc5c9577-6rf8b"
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: W1101 10:36:04.525407    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/crio-10a191901932325210726cb2a29fb7893ef9c380c33c10760a88f82d91fd39fe WatchSource:0}: Error finding container 10a191901932325210726cb2a29fb7893ef9c380c33c10760a88f82d91fd39fe: Status 404 returned error can't find the container with id 10a191901932325210726cb2a29fb7893ef9c380c33c10760a88f82d91fd39fe
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: W1101 10:36:04.539621    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/crio-3d7641d99098456ebc3a06e3ed7a6e120bd51181b7e3a4d8d6f23ead9039ee33 WatchSource:0}: Error finding container 3d7641d99098456ebc3a06e3ed7a6e120bd51181b7e3a4d8d6f23ead9039ee33: Status 404 returned error can't find the container with id 3d7641d99098456ebc3a06e3ed7a6e120bd51181b7e3a4d8d6f23ead9039ee33
	Nov 01 10:36:04 embed-certs-618070 kubelet[1289]: I1101 10:36:04.788554    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.788534021 podStartE2EDuration="40.788534021s" podCreationTimestamp="2025-11-01 10:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:36:04.774084832 +0000 UTC m=+47.785521766" watchObservedRunningTime="2025-11-01 10:36:04.788534021 +0000 UTC m=+47.799970947"
	Nov 01 10:36:06 embed-certs-618070 kubelet[1289]: I1101 10:36:06.898509    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6rf8b" podStartSLOduration=43.898488953 podStartE2EDuration="43.898488953s" podCreationTimestamp="2025-11-01 10:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:36:04.788994503 +0000 UTC m=+47.800431437" watchObservedRunningTime="2025-11-01 10:36:06.898488953 +0000 UTC m=+49.909925887"
	Nov 01 10:36:07 embed-certs-618070 kubelet[1289]: I1101 10:36:07.091216    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzr2\" (UniqueName: \"kubernetes.io/projected/f0e3261c-8c25-4d4b-a969-0f9698b1e429-kube-api-access-jlzr2\") pod \"busybox\" (UID: \"f0e3261c-8c25-4d4b-a969-0f9698b1e429\") " pod="default/busybox"
	
	
	==> storage-provisioner [c66cb8900cc0e452281c35734d041ad1ae1db887f4b49942656983c13034e2c4] <==
	I1101 10:36:04.600372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:36:04.624663       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:36:04.624808       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:36:04.627891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:04.637312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:36:04.637707       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:36:04.640560       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-618070_be8a803b-62c5-4d99-a9e3-1c2fb6952405!
	W1101 10:36:04.643471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:36:04.646302       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba9fae06-5ee5-464b-964a-84fa8bc80eb0", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-618070_be8a803b-62c5-4d99-a9e3-1c2fb6952405 became leader
	W1101 10:36:04.663624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:36:04.741554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-618070_be8a803b-62c5-4d99-a9e3-1c2fb6952405!
	W1101 10:36:06.667429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:06.692840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:08.696485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:08.701379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:10.704946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:10.710062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:12.713883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:12.721307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:14.725560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:14.731363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:16.748522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:36:16.757606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
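The storage-provisioner log above ends with repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings, which come from its leader election still locking on an Endpoints object. As an illustration only (not the provisioner's actual code; the lock name and namespace are taken from the log above, everything else is assumed), a coordination.k8s.io Lease lock with client-go avoids those warnings:

// Sketch only: Lease-based leader election with client-go, assuming an
// in-cluster config. Lock name/namespace mirror the log; the rest is illustrative.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// coordination.k8s.io/v1 Lease lock instead of the deprecated v1 Endpoints lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting controller") },
			OnStoppedLeading: func() { log.Println("lost lease, stopping") },
		},
	})
}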
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-618070 -n embed-certs-618070
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-618070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-170467 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-170467 --alsologtostderr -v=1: exit status 80 (2.117854396s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-170467 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:37:15.832364  476224 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:37:15.832541  476224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:15.832555  476224 out.go:374] Setting ErrFile to fd 2...
	I1101 10:37:15.832560  476224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:15.832844  476224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:37:15.833125  476224 out.go:368] Setting JSON to false
	I1101 10:37:15.833152  476224 mustload.go:66] Loading cluster: no-preload-170467
	I1101 10:37:15.833549  476224 config.go:182] Loaded profile config "no-preload-170467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:15.834134  476224 cli_runner.go:164] Run: docker container inspect no-preload-170467 --format={{.State.Status}}
	I1101 10:37:15.855760  476224 host.go:66] Checking if "no-preload-170467" exists ...
	I1101 10:37:15.856091  476224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:15.941197  476224 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:37:15.930959458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:15.942085  476224 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-170467 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:37:15.945549  476224 out.go:179] * Pausing node no-preload-170467 ... 
	I1101 10:37:15.948657  476224 host.go:66] Checking if "no-preload-170467" exists ...
	I1101 10:37:15.949010  476224 ssh_runner.go:195] Run: systemctl --version
	I1101 10:37:15.949076  476224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-170467
	I1101 10:37:15.968298  476224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/no-preload-170467/id_rsa Username:docker}
	I1101 10:37:16.084918  476224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:37:16.101743  476224 pause.go:52] kubelet running: true
	I1101 10:37:16.101808  476224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:37:16.361307  476224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:37:16.361394  476224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:37:16.445591  476224 cri.go:89] found id: "ae42d8e4ab16080f670c8ff2b53493af12b192a15fe23571a2dd1102d8b6c641"
	I1101 10:37:16.445616  476224 cri.go:89] found id: "2542d184d846f8559dc4739455bb2da603e70043dd6f539aa02dd36184e7f96f"
	I1101 10:37:16.445622  476224 cri.go:89] found id: "aa0cefe2b636bf67720efda3df850d0b038d67c5882db88b2275ba2af1d5ad01"
	I1101 10:37:16.445626  476224 cri.go:89] found id: "2d334976880a328aa72139d2bd78a22dd5ca66a3c58c97147961c3a55f5dfdb7"
	I1101 10:37:16.445632  476224 cri.go:89] found id: "e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862"
	I1101 10:37:16.445639  476224 cri.go:89] found id: "92666d4844f6b3588b8743cdd07e1886645c89486d34a6c9f834dbddcf36cca7"
	I1101 10:37:16.445643  476224 cri.go:89] found id: "463cb4a73c0ec75555794f8ae2b5327835e1820527eae4f732bfe7662c895e04"
	I1101 10:37:16.445646  476224 cri.go:89] found id: "a32e2d3237a2af02c8bb26acabd5b253db72f624e204b7da7d0f30cd2b961eda"
	I1101 10:37:16.445649  476224 cri.go:89] found id: "dfce63142ccedebc3c9346d9e3d23366f79ba77d408a006db59c49b63f4fc7c0"
	I1101 10:37:16.445655  476224 cri.go:89] found id: "e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	I1101 10:37:16.445658  476224 cri.go:89] found id: "4978e3acc12ae303ca549d64a786644a09b443cca018b949c9ec3b02ef2b8b0b"
	I1101 10:37:16.445661  476224 cri.go:89] found id: ""
	I1101 10:37:16.445748  476224 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:37:16.457246  476224 retry.go:31] will retry after 281.945574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:16Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:37:16.739733  476224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:37:16.753304  476224 pause.go:52] kubelet running: false
	I1101 10:37:16.753405  476224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:37:16.939220  476224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:37:16.939318  476224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:37:17.011241  476224 cri.go:89] found id: "ae42d8e4ab16080f670c8ff2b53493af12b192a15fe23571a2dd1102d8b6c641"
	I1101 10:37:17.011263  476224 cri.go:89] found id: "2542d184d846f8559dc4739455bb2da603e70043dd6f539aa02dd36184e7f96f"
	I1101 10:37:17.011268  476224 cri.go:89] found id: "aa0cefe2b636bf67720efda3df850d0b038d67c5882db88b2275ba2af1d5ad01"
	I1101 10:37:17.011272  476224 cri.go:89] found id: "2d334976880a328aa72139d2bd78a22dd5ca66a3c58c97147961c3a55f5dfdb7"
	I1101 10:37:17.011276  476224 cri.go:89] found id: "e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862"
	I1101 10:37:17.011279  476224 cri.go:89] found id: "92666d4844f6b3588b8743cdd07e1886645c89486d34a6c9f834dbddcf36cca7"
	I1101 10:37:17.011283  476224 cri.go:89] found id: "463cb4a73c0ec75555794f8ae2b5327835e1820527eae4f732bfe7662c895e04"
	I1101 10:37:17.011286  476224 cri.go:89] found id: "a32e2d3237a2af02c8bb26acabd5b253db72f624e204b7da7d0f30cd2b961eda"
	I1101 10:37:17.011289  476224 cri.go:89] found id: "dfce63142ccedebc3c9346d9e3d23366f79ba77d408a006db59c49b63f4fc7c0"
	I1101 10:37:17.011296  476224 cri.go:89] found id: "e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	I1101 10:37:17.011299  476224 cri.go:89] found id: "4978e3acc12ae303ca549d64a786644a09b443cca018b949c9ec3b02ef2b8b0b"
	I1101 10:37:17.011302  476224 cri.go:89] found id: ""
	I1101 10:37:17.011357  476224 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:37:17.024547  476224 retry.go:31] will retry after 543.313793ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:17Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:37:17.568132  476224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:37:17.582262  476224 pause.go:52] kubelet running: false
	I1101 10:37:17.582386  476224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:37:17.772143  476224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:37:17.772275  476224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:37:17.846109  476224 cri.go:89] found id: "ae42d8e4ab16080f670c8ff2b53493af12b192a15fe23571a2dd1102d8b6c641"
	I1101 10:37:17.846181  476224 cri.go:89] found id: "2542d184d846f8559dc4739455bb2da603e70043dd6f539aa02dd36184e7f96f"
	I1101 10:37:17.846194  476224 cri.go:89] found id: "aa0cefe2b636bf67720efda3df850d0b038d67c5882db88b2275ba2af1d5ad01"
	I1101 10:37:17.846199  476224 cri.go:89] found id: "2d334976880a328aa72139d2bd78a22dd5ca66a3c58c97147961c3a55f5dfdb7"
	I1101 10:37:17.846202  476224 cri.go:89] found id: "e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862"
	I1101 10:37:17.846206  476224 cri.go:89] found id: "92666d4844f6b3588b8743cdd07e1886645c89486d34a6c9f834dbddcf36cca7"
	I1101 10:37:17.846209  476224 cri.go:89] found id: "463cb4a73c0ec75555794f8ae2b5327835e1820527eae4f732bfe7662c895e04"
	I1101 10:37:17.846212  476224 cri.go:89] found id: "a32e2d3237a2af02c8bb26acabd5b253db72f624e204b7da7d0f30cd2b961eda"
	I1101 10:37:17.846215  476224 cri.go:89] found id: "dfce63142ccedebc3c9346d9e3d23366f79ba77d408a006db59c49b63f4fc7c0"
	I1101 10:37:17.846221  476224 cri.go:89] found id: "e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	I1101 10:37:17.846225  476224 cri.go:89] found id: "4978e3acc12ae303ca549d64a786644a09b443cca018b949c9ec3b02ef2b8b0b"
	I1101 10:37:17.846228  476224 cri.go:89] found id: ""
	I1101 10:37:17.846288  476224 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:37:17.861487  476224 out.go:203] 
	W1101 10:37:17.864313  476224 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:37:17.864343  476224 out.go:285] * 
	* 
	W1101 10:37:17.871294  476224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:37:17.874473  476224 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-170467 --alsologtostderr -v=1 failed: exit status 80
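The pause attempt above fails because every retry of `sudo runc list -f json` dies on `open /run/runc: no such file or directory`, even though `crictl` still lists the kube-system containers. A minimal debugging sketch of the same probe sequence, to be run on the node (the extra check of /run/runc is an assumption about where this runc build keeps its state, not something the test asserts):

// Debugging sketch only: rerun the probes minikube's pause path issues, plus a
// filesystem check for runc's state directory. Commands mirror the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
}

func main() {
	// Is the kubelet still active? (pause disables it first)
	run("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")

	// CRI-O still reports the control-plane containers...
	run("sudo", "crictl", "ps", "-a", "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system")

	// ...but the runc root used here is missing, so `runc list` fails.
	if _, err := os.Stat("/run/runc"); err != nil {
		fmt.Println("note: /run/runc not present:", err)
	}
	run("sudo", "runc", "list", "-f", "json")
}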
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-170467
helpers_test.go:243: (dbg) docker inspect no-preload-170467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174",
	        "Created": "2025-11-01T10:34:34.605945811Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471345,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:36:10.716147667Z",
	            "FinishedAt": "2025-11-01T10:36:09.841873992Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/hostname",
	        "HostsPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/hosts",
	        "LogPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174-json.log",
	        "Name": "/no-preload-170467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-170467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-170467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174",
	                "LowerDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-170467",
	                "Source": "/var/lib/docker/volumes/no-preload-170467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-170467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-170467",
	                "name.minikube.sigs.k8s.io": "no-preload-170467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "680dbf312615a8da1d02c8a3bb317a19977cc836bd6d2ab4e37fc4d486ee6114",
	            "SandboxKey": "/var/run/docker/netns/680dbf312615",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-170467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:3d:90:fa:8c:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a76db7f9c768e30abf0f10f25f36c5fa2518f946ae0f8436a94ea13f0365a6d0",
	                    "EndpointID": "09ad27a8e9d90d5c9b5ddcb7b1fcb405a06aabe1b0960ac7012fe9a343b2d6f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-170467",
	                        "496a258eae10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
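The inspect output confirms the published host ports (22/tcp mapped to 33430 and so on) that the earlier cli_runner call reads with a Go template. As a convenience sketch only, the same lookup can be scripted against the Docker CLI; the container name is the profile from this run, and nothing further about minikube internals is implied:

// Sketch: read the host port mapped to the container's 22/tcp SSH port,
// using the same --format template shown in the test log. Assumes the
// Docker CLI is on PATH.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("no-preload-170467", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh is published on 127.0.0.1:" + port) // e.g. 33430 in this run
}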
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467: exit status 2 (392.18735ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-170467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-170467 logs -n 25: (1.355570127s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900    │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900    │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	│ stop    │ -p old-k8s-version-180313 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:36:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:36:30.935812  473779 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:36:30.936457  473779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:30.936489  473779 out.go:374] Setting ErrFile to fd 2...
	I1101 10:36:30.936507  473779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:30.936791  473779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:36:30.937187  473779 out.go:368] Setting JSON to false
	I1101 10:36:30.938290  473779 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8340,"bootTime":1761985051,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:36:30.938385  473779 start.go:143] virtualization:  
	I1101 10:36:30.943616  473779 out.go:179] * [embed-certs-618070] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:36:30.946764  473779 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:36:30.946920  473779 notify.go:221] Checking for updates...
	I1101 10:36:30.953056  473779 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:36:30.956062  473779 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:30.958994  473779 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:36:30.961861  473779 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:36:30.964783  473779 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:36:30.968330  473779 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:30.968936  473779 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:36:30.998419  473779 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:36:30.998568  473779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:31.138730  473779 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:36:31.126505613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:36:31.138841  473779 docker.go:319] overlay module found
	I1101 10:36:31.142052  473779 out.go:179] * Using the docker driver based on existing profile
	I1101 10:36:31.144919  473779 start.go:309] selected driver: docker
	I1101 10:36:31.144944  473779 start.go:930] validating driver "docker" against &{Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:31.145048  473779 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:36:31.145857  473779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:31.241635  473779 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:36:31.231725628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:36:31.242016  473779 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:36:31.242043  473779 cni.go:84] Creating CNI manager for ""
	I1101 10:36:31.242117  473779 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:31.242165  473779 start.go:353] cluster config:
	{Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:31.248080  473779 out.go:179] * Starting "embed-certs-618070" primary control-plane node in "embed-certs-618070" cluster
	I1101 10:36:31.250957  473779 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:36:31.254123  473779 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:36:31.257152  473779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:31.257215  473779 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:36:31.257231  473779 cache.go:59] Caching tarball of preloaded images
	I1101 10:36:31.257336  473779 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:36:31.257353  473779 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:36:31.257457  473779 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json ...
	I1101 10:36:31.257805  473779 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:36:31.286738  473779 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:36:31.286765  473779 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:36:31.286790  473779 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:36:31.286812  473779 start.go:360] acquireMachinesLock for embed-certs-618070: {Name:mk13307b6a73c01f486aea48ffd4761ad677dd7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:31.286868  473779 start.go:364] duration metric: took 33.929µs to acquireMachinesLock for "embed-certs-618070"
	I1101 10:36:31.286892  473779 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:36:31.286898  473779 fix.go:54] fixHost starting: 
	I1101 10:36:31.287146  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:31.306935  473779 fix.go:112] recreateIfNeeded on embed-certs-618070: state=Stopped err=<nil>
	W1101 10:36:31.306990  473779 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:36:31.043213  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:33.537161  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:31.310314  473779 out.go:252] * Restarting existing docker container for "embed-certs-618070" ...
	I1101 10:36:31.310394  473779 cli_runner.go:164] Run: docker start embed-certs-618070
	I1101 10:36:31.662453  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:31.682390  473779 kic.go:430] container "embed-certs-618070" state is running.
	I1101 10:36:31.684207  473779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:36:31.712925  473779 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json ...
	I1101 10:36:31.713152  473779 machine.go:94] provisionDockerMachine start ...
	I1101 10:36:31.713259  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:31.745489  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:31.745893  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:31.745907  473779 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:36:31.746761  473779 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33482->127.0.0.1:33435: read: connection reset by peer
	I1101 10:36:34.923346  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-618070
	
	I1101 10:36:34.923427  473779 ubuntu.go:182] provisioning hostname "embed-certs-618070"
	I1101 10:36:34.923524  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:34.948949  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:34.949257  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:34.949277  473779 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-618070 && echo "embed-certs-618070" | sudo tee /etc/hostname
	I1101 10:36:35.131904  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-618070
	
	I1101 10:36:35.132098  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:35.159298  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:35.159617  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:35.159639  473779 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-618070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-618070/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-618070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:36:35.331378  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:36:35.331461  473779 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:36:35.331506  473779 ubuntu.go:190] setting up certificates
	I1101 10:36:35.331558  473779 provision.go:84] configureAuth start
	I1101 10:36:35.331653  473779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:36:35.353626  473779 provision.go:143] copyHostCerts
	I1101 10:36:35.353724  473779 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:36:35.353741  473779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:36:35.353818  473779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:36:35.353918  473779 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:36:35.353923  473779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:36:35.353948  473779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:36:35.353996  473779 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:36:35.354001  473779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:36:35.354030  473779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:36:35.354074  473779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.embed-certs-618070 san=[127.0.0.1 192.168.85.2 embed-certs-618070 localhost minikube]
	I1101 10:36:35.476490  473779 provision.go:177] copyRemoteCerts
	I1101 10:36:35.476609  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:36:35.476685  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:35.496953  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:35.616752  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:36:35.643122  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:36:35.668409  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:36:35.695882  473779 provision.go:87] duration metric: took 364.287717ms to configureAuth
	I1101 10:36:35.695913  473779 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:36:35.696112  473779 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:35.696236  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:35.735545  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:35.735856  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:35.735871  473779 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:36:36.229037  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:36:36.229073  473779 machine.go:97] duration metric: took 4.515909315s to provisionDockerMachine
	I1101 10:36:36.229085  473779 start.go:293] postStartSetup for "embed-certs-618070" (driver="docker")
	I1101 10:36:36.229096  473779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:36:36.229165  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:36:36.229261  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.255696  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.384085  473779 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:36:36.389336  473779 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:36:36.389367  473779 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:36:36.389378  473779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:36:36.389428  473779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:36:36.389515  473779 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:36:36.389623  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:36:36.404740  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:36:36.436077  473779 start.go:296] duration metric: took 206.976627ms for postStartSetup
	I1101 10:36:36.436164  473779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:36:36.436215  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.456938  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.563457  473779 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:36:36.568448  473779 fix.go:56] duration metric: took 5.281542706s for fixHost
	I1101 10:36:36.568475  473779 start.go:83] releasing machines lock for "embed-certs-618070", held for 5.281593029s
	I1101 10:36:36.568562  473779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:36:36.589421  473779 ssh_runner.go:195] Run: cat /version.json
	I1101 10:36:36.589485  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.589738  473779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:36:36.589794  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.629764  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.632447  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.754380  473779 ssh_runner.go:195] Run: systemctl --version
	I1101 10:36:36.865051  473779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:36:36.937644  473779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:36:36.942784  473779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:36:36.942899  473779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:36:36.952381  473779 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:36:36.952455  473779 start.go:496] detecting cgroup driver to use...
	I1101 10:36:36.952503  473779 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:36:36.952585  473779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:36:36.970811  473779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:36:36.985202  473779 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:36:36.985322  473779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:36:37.003611  473779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:36:37.025138  473779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:36:37.192365  473779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:36:37.348065  473779 docker.go:234] disabling docker service ...
	I1101 10:36:37.348210  473779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:36:37.366325  473779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:36:37.381322  473779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:36:37.558866  473779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:36:37.752262  473779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:36:37.767628  473779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:36:37.791049  473779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:36:37.791190  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.803022  473779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:36:37.803145  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.812833  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.822181  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.831390  473779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:36:37.840564  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.851123  473779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.862982  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.879042  473779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:36:37.894326  473779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:36:37.903295  473779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:36:38.116393  473779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:36:38.593018  473779 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:36:38.593140  473779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:36:38.604544  473779 start.go:564] Will wait 60s for crictl version
	I1101 10:36:38.604691  473779 ssh_runner.go:195] Run: which crictl
	I1101 10:36:38.609154  473779 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:36:38.656838  473779 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:36:38.657023  473779 ssh_runner.go:195] Run: crio --version
	I1101 10:36:38.693032  473779 ssh_runner.go:195] Run: crio --version
	I1101 10:36:38.739264  473779 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
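For reference, the CRI-O reconfiguration the log performs above can be reproduced by hand with roughly the following commands. This is only an illustrative consolidation of the commands already shown in the log; the config path, pause image, cgroup manager, and crictl endpoint are taken from that output and assume the default kicbase node layout.

	# consolidation of the CRI-O setup steps logged above (values from the log)
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version    # expect RuntimeName cri-o, RuntimeVersion 1.34.1, as logged above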
	W1101 10:36:35.538879  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:37.538939  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:39.540444  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:38.742573  473779 cli_runner.go:164] Run: docker network inspect embed-certs-618070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:36:38.768457  473779 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:36:38.774714  473779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:36:38.790994  473779 kubeadm.go:884] updating cluster {Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:36:38.791117  473779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:38.791168  473779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:36:38.848046  473779 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:36:38.848071  473779 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:36:38.848128  473779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:36:38.876333  473779 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:36:38.876358  473779 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:36:38.876366  473779 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:36:38.876458  473779 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-618070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:36:38.876550  473779 ssh_runner.go:195] Run: crio config
	I1101 10:36:38.965966  473779 cni.go:84] Creating CNI manager for ""
	I1101 10:36:38.965996  473779 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:38.966016  473779 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:36:38.966039  473779 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-618070 NodeName:embed-certs-618070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:36:38.966186  473779 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-618070"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:36:38.966255  473779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:36:38.974830  473779 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:36:38.974912  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:36:38.983141  473779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 10:36:38.996714  473779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:36:39.012743  473779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 10:36:39.027468  473779 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:36:39.031409  473779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
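The /etc/hosts updates above (host.minikube.internal and control-plane.minikube.internal) follow the same idempotent pattern: check for the entry, and if it is missing rewrite the file without any stale entry and append the new one. A minimal sketch of that pattern, using the IP and name from the log:

	# illustrative form of the idempotent /etc/hosts update logged above
	if ! grep -q $'192.168.85.2\tcontrol-plane.minikube.internal' /etc/hosts; then
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts    # rewrite via a temp file rather than editing in place
	fi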
	I1101 10:36:39.041837  473779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:36:39.206028  473779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:36:39.233345  473779 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070 for IP: 192.168.85.2
	I1101 10:36:39.233368  473779 certs.go:195] generating shared ca certs ...
	I1101 10:36:39.233392  473779 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:39.233535  473779 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:36:39.233579  473779 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:36:39.233589  473779 certs.go:257] generating profile certs ...
	I1101 10:36:39.233682  473779 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.key
	I1101 10:36:39.233770  473779 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key.eb801fed
	I1101 10:36:39.233818  473779 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key
	I1101 10:36:39.233923  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:36:39.233957  473779 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:36:39.233970  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:36:39.234010  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:36:39.234037  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:36:39.234064  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:36:39.234123  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:36:39.234716  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:36:39.273976  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:36:39.337823  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:36:39.398371  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:36:39.475662  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:36:39.531819  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:36:39.570423  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:36:39.595646  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:36:39.617688  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:36:39.641031  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:36:39.673206  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:36:39.694917  473779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:36:39.711364  473779 ssh_runner.go:195] Run: openssl version
	I1101 10:36:39.719690  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:36:39.731156  473779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:36:39.735561  473779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:36:39.735631  473779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:36:39.784039  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:36:39.793495  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:36:39.803049  473779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:36:39.807415  473779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:36:39.807483  473779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:36:39.859105  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:36:39.868210  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:36:39.878274  473779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:36:39.882944  473779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:36:39.883019  473779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:36:39.934719  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:36:39.946412  473779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:36:39.970115  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:36:40.076826  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:36:40.163714  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:36:40.252469  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:36:40.467029  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:36:40.635681  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
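The certificate handling above amounts to trusting the minikube CA inside the node and confirming each control-plane certificate remains valid for at least 24 hours. A hedged sketch of the equivalent manual checks, with file names and the hash value taken from the log output:

	# trust the minikube CA and check certificate expiry (paths and hash from the log above)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0    # b5213941 is the openssl x509 -hash of the CA
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for at least another 24h"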
	I1101 10:36:40.724484  473779 kubeadm.go:401] StartCluster: {Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:40.724570  473779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:36:40.724641  473779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:36:40.782646  473779 cri.go:89] found id: "847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764"
	I1101 10:36:40.782669  473779 cri.go:89] found id: "86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f"
	I1101 10:36:40.782674  473779 cri.go:89] found id: "0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1"
	I1101 10:36:40.782687  473779 cri.go:89] found id: "c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff"
	I1101 10:36:40.782691  473779 cri.go:89] found id: ""
	I1101 10:36:40.782742  473779 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:36:40.821134  473779 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:40Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:36:40.821232  473779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:36:40.836971  473779 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:36:40.836991  473779 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:36:40.837043  473779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:36:40.852971  473779 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:36:40.853580  473779 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-618070" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:40.853867  473779 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-618070" cluster setting kubeconfig missing "embed-certs-618070" context setting]
	I1101 10:36:40.854316  473779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:40.855770  473779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:36:40.875689  473779 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:36:40.875723  473779 kubeadm.go:602] duration metric: took 38.726968ms to restartPrimaryControlPlane
	I1101 10:36:40.875733  473779 kubeadm.go:403] duration metric: took 151.258632ms to StartCluster
	I1101 10:36:40.875748  473779 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:40.875805  473779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:40.877044  473779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:40.877258  473779 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:36:40.877540  473779 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:40.877582  473779 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:36:40.877647  473779 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-618070"
	I1101 10:36:40.877661  473779 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-618070"
	W1101 10:36:40.877672  473779 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:36:40.877712  473779 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:36:40.877990  473779 addons.go:70] Setting dashboard=true in profile "embed-certs-618070"
	I1101 10:36:40.878013  473779 addons.go:239] Setting addon dashboard=true in "embed-certs-618070"
	W1101 10:36:40.878021  473779 addons.go:248] addon dashboard should already be in state true
	I1101 10:36:40.878040  473779 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:36:40.878542  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.878955  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.879260  473779 addons.go:70] Setting default-storageclass=true in profile "embed-certs-618070"
	I1101 10:36:40.879299  473779 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-618070"
	I1101 10:36:40.879598  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.883937  473779 out.go:179] * Verifying Kubernetes components...
	I1101 10:36:40.891598  473779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:36:40.965445  473779 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:36:40.968499  473779 addons.go:239] Setting addon default-storageclass=true in "embed-certs-618070"
	W1101 10:36:40.968531  473779 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:36:40.968556  473779 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:36:40.968936  473779 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:36:40.968951  473779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:36:40.969019  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:40.969559  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.987700  473779 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:36:40.990842  473779 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1101 10:36:42.042172  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:44.055406  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:40.994251  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:36:40.994280  473779 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:36:40.994371  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:41.000971  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:41.029992  473779 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:36:41.030029  473779 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:36:41.030096  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:41.053280  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:41.077685  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:41.327282  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:36:41.327360  473779 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:36:41.399173  473779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:36:41.472383  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:36:41.472455  473779 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:36:41.478962  473779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:36:41.508030  473779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:36:41.551707  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:36:41.551784  473779 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:36:41.680976  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:36:41.680996  473779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:36:41.811972  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:36:41.811993  473779 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:36:41.882410  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:36:41.882432  473779 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:36:41.904703  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:36:41.904771  473779 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:36:41.932688  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:36:41.932762  473779 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:36:41.958745  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:36:41.958820  473779 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:36:42.005504  473779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:36:47.081438  473779 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.602367909s)
	I1101 10:36:47.081485  473779 node_ready.go:35] waiting up to 6m0s for node "embed-certs-618070" to be "Ready" ...
	I1101 10:36:47.081840  473779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.682589717s)
	I1101 10:36:47.165492  473779 node_ready.go:49] node "embed-certs-618070" is "Ready"
	I1101 10:36:47.165525  473779 node_ready.go:38] duration metric: took 84.011752ms for node "embed-certs-618070" to be "Ready" ...
	I1101 10:36:47.165548  473779 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:36:47.165604  473779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:36:48.316090  473779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.807985751s)
	I1101 10:36:48.316214  473779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.310624026s)
	I1101 10:36:48.316414  473779 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.150788924s)
	I1101 10:36:48.316437  473779 api_server.go:72] duration metric: took 7.439148634s to wait for apiserver process to appear ...
	I1101 10:36:48.316464  473779 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:36:48.316487  473779 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:48.319423  473779 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-618070 addons enable metrics-server
	
	I1101 10:36:48.322428  473779 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1101 10:36:46.538107  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:49.037214  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:48.325481  473779 addons.go:515] duration metric: took 7.44788004s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 10:36:48.331364  473779 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:36:48.331409  473779 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:36:48.816602  473779 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:48.833244  473779 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:36:48.834294  473779 api_server.go:141] control plane version: v1.34.1
	I1101 10:36:48.834363  473779 api_server.go:131] duration metric: took 517.88637ms to wait for apiserver health ...
	I1101 10:36:48.834388  473779 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:36:48.837753  473779 system_pods.go:59] 8 kube-system pods found
	I1101 10:36:48.837840  473779 system_pods.go:61] "coredns-66bc5c9577-6rf8b" [3db12ed9-30d4-45e6-9c67-8fa581fe4652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:36:48.837866  473779 system_pods.go:61] "etcd-embed-certs-618070" [90e1511c-e9c4-4687-bd18-42a6032ca610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:36:48.837905  473779 system_pods.go:61] "kindnet-df7sw" [268ab883-4df6-47bd-8d25-523991f7a2d0] Running
	I1101 10:36:48.837931  473779 system_pods.go:61] "kube-apiserver-embed-certs-618070" [1be29177-a4d5-4272-a85a-a241133bf93d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:36:48.837955  473779 system_pods.go:61] "kube-controller-manager-embed-certs-618070" [3bb05f71-abcf-464e-8c7b-7e2d09df97aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:36:48.837993  473779 system_pods.go:61] "kube-proxy-8lcjb" [d2f8c1d5-fad6-4e84-af61-5152f65cf2bb] Running
	I1101 10:36:48.838025  473779 system_pods.go:61] "kube-scheduler-embed-certs-618070" [25897b84-2d6e-4bcd-adff-1d385013f52f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:36:48.838046  473779 system_pods.go:61] "storage-provisioner" [31534ded-cd9f-410b-abe9-f1992dd225bc] Running
	I1101 10:36:48.838080  473779 system_pods.go:74] duration metric: took 3.671622ms to wait for pod list to return data ...
	I1101 10:36:48.838107  473779 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:36:48.840619  473779 default_sa.go:45] found service account: "default"
	I1101 10:36:48.840682  473779 default_sa.go:55] duration metric: took 2.55187ms for default service account to be created ...
	I1101 10:36:48.840708  473779 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:36:48.845336  473779 system_pods.go:86] 8 kube-system pods found
	I1101 10:36:48.845375  473779 system_pods.go:89] "coredns-66bc5c9577-6rf8b" [3db12ed9-30d4-45e6-9c67-8fa581fe4652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:36:48.845386  473779 system_pods.go:89] "etcd-embed-certs-618070" [90e1511c-e9c4-4687-bd18-42a6032ca610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:36:48.845392  473779 system_pods.go:89] "kindnet-df7sw" [268ab883-4df6-47bd-8d25-523991f7a2d0] Running
	I1101 10:36:48.845401  473779 system_pods.go:89] "kube-apiserver-embed-certs-618070" [1be29177-a4d5-4272-a85a-a241133bf93d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:36:48.845407  473779 system_pods.go:89] "kube-controller-manager-embed-certs-618070" [3bb05f71-abcf-464e-8c7b-7e2d09df97aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:36:48.845412  473779 system_pods.go:89] "kube-proxy-8lcjb" [d2f8c1d5-fad6-4e84-af61-5152f65cf2bb] Running
	I1101 10:36:48.845419  473779 system_pods.go:89] "kube-scheduler-embed-certs-618070" [25897b84-2d6e-4bcd-adff-1d385013f52f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:36:48.845423  473779 system_pods.go:89] "storage-provisioner" [31534ded-cd9f-410b-abe9-f1992dd225bc] Running
	I1101 10:36:48.845435  473779 system_pods.go:126] duration metric: took 4.714637ms to wait for k8s-apps to be running ...
	I1101 10:36:48.845448  473779 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:36:48.845504  473779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:36:48.865285  473779 system_svc.go:56] duration metric: took 19.827076ms WaitForService to wait for kubelet
	I1101 10:36:48.865316  473779 kubeadm.go:587] duration metric: took 7.988026371s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:36:48.865335  473779 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:36:48.872136  473779 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:36:48.872172  473779 node_conditions.go:123] node cpu capacity is 2
	I1101 10:36:48.872186  473779 node_conditions.go:105] duration metric: took 6.844739ms to run NodePressure ...
	I1101 10:36:48.872198  473779 start.go:242] waiting for startup goroutines ...
	I1101 10:36:48.872206  473779 start.go:247] waiting for cluster config update ...
	I1101 10:36:48.872221  473779 start.go:256] writing updated cluster config ...
	I1101 10:36:48.872529  473779 ssh_runner.go:195] Run: rm -f paused
	I1101 10:36:48.877184  473779 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:36:48.880892  473779 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rf8b" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:36:50.891713  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:36:51.536676  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:53.537349  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:53.387188  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:36:55.888087  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:36:56.036664  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:58.039549  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:37:00.091096  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:58.386884  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:00.391377  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	I1101 10:37:01.538532  471219 pod_ready.go:94] pod "coredns-66bc5c9577-f8tc4" is "Ready"
	I1101 10:37:01.538561  471219 pod_ready.go:86] duration metric: took 34.507167842s for pod "coredns-66bc5c9577-f8tc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.541262  471219 pod_ready.go:83] waiting for pod "etcd-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.545957  471219 pod_ready.go:94] pod "etcd-no-preload-170467" is "Ready"
	I1101 10:37:01.545985  471219 pod_ready.go:86] duration metric: took 4.699293ms for pod "etcd-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.548104  471219 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.552352  471219 pod_ready.go:94] pod "kube-apiserver-no-preload-170467" is "Ready"
	I1101 10:37:01.552425  471219 pod_ready.go:86] duration metric: took 4.293648ms for pod "kube-apiserver-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.554528  471219 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.734829  471219 pod_ready.go:94] pod "kube-controller-manager-no-preload-170467" is "Ready"
	I1101 10:37:01.734912  471219 pod_ready.go:86] duration metric: took 180.360841ms for pod "kube-controller-manager-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.934947  471219 pod_ready.go:83] waiting for pod "kube-proxy-8fvnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.335047  471219 pod_ready.go:94] pod "kube-proxy-8fvnf" is "Ready"
	I1101 10:37:02.335074  471219 pod_ready.go:86] duration metric: took 400.098246ms for pod "kube-proxy-8fvnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.535140  471219 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.935373  471219 pod_ready.go:94] pod "kube-scheduler-no-preload-170467" is "Ready"
	I1101 10:37:02.935402  471219 pod_ready.go:86] duration metric: took 400.22894ms for pod "kube-scheduler-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.935415  471219 pod_ready.go:40] duration metric: took 35.908808363s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:37:02.991550  471219 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:37:02.994737  471219 out.go:179] * Done! kubectl is now configured to use "no-preload-170467" cluster and "default" namespace by default
	W1101 10:37:02.894232  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:05.386836  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:07.886128  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:09.886520  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:12.387626  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:14.887183  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.906917928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.925490979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.930324215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.956293924Z" level=info msg="Created container e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6/dashboard-metrics-scraper" id=2ace25d1-18e4-444d-9a10-09c1d1a6408d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.957930871Z" level=info msg="Starting container: e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd" id=5672d89a-eee1-4746-a19e-361dc2b7b3fe name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.964280846Z" level=info msg="Started container" PID=1626 containerID=e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6/dashboard-metrics-scraper id=5672d89a-eee1-4746-a19e-361dc2b7b3fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02
	Nov 01 10:37:00 no-preload-170467 conmon[1624]: conmon e8b63b5e9f8d37ab01b3 <ninfo>: container 1626 exited with status 1
	Nov 01 10:37:01 no-preload-170467 crio[649]: time="2025-11-01T10:37:01.103694855Z" level=info msg="Removing container: d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26" id=aaee3017-a02f-4f71-a4ec-7dadd523d075 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:01 no-preload-170467 crio[649]: time="2025-11-01T10:37:01.117178033Z" level=info msg="Error loading conmon cgroup of container d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26: cgroup deleted" id=aaee3017-a02f-4f71-a4ec-7dadd523d075 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:01 no-preload-170467 crio[649]: time="2025-11-01T10:37:01.127188913Z" level=info msg="Removed container d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6/dashboard-metrics-scraper" id=aaee3017-a02f-4f71-a4ec-7dadd523d075 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.629220297Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.634082168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.634116852Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.63413899Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.637386135Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.637421319Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.637446419Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.640641181Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.640674733Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.640697174Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.643839283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.643872801Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.643894537Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.64684296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.646876996Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e8b63b5e9f8d3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   dc08ffbfdfc36       dashboard-metrics-scraper-6ffb444bf9-674q6   kubernetes-dashboard
	ae42d8e4ab160       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   cd277f5c86ba5       storage-provisioner                          kube-system
	4978e3acc12ae       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   c5508de8aef32       kubernetes-dashboard-855c9754f9-k7scm        kubernetes-dashboard
	3c03303a31853       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   ed4b8d7cd7c4f       busybox                                      default
	2542d184d846f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   dfc59fc21faca       coredns-66bc5c9577-f8tc4                     kube-system
	aa0cefe2b636b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   3979057d870f6       kube-proxy-8fvnf                             kube-system
	2d334976880a3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   b5181abb56843       kindnet-5n4vx                                kube-system
	e34fa2d2c95db       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   cd277f5c86ba5       storage-provisioner                          kube-system
	92666d4844f6b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   97b54eca601d7       kube-controller-manager-no-preload-170467    kube-system
	463cb4a73c0ec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   ec38191257046       kube-scheduler-no-preload-170467             kube-system
	a32e2d3237a2a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   9df1dc9f8d77a       etcd-no-preload-170467                       kube-system
	dfce63142cced       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   3c339fedaf02c       kube-apiserver-no-preload-170467             kube-system
	
	
	==> coredns [2542d184d846f8559dc4739455bb2da603e70043dd6f539aa02dd36184e7f96f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45959 - 34490 "HINFO IN 5945262584664602275.6701990702020595875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017766469s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-170467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-170467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=no-preload-170467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-170467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:37:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-170467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a96dd2dd-60b3-4301-a26e-0deb5b7ad5c7
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-f8tc4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-170467                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-5n4vx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-170467              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-170467     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-8fvnf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-170467              100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-674q6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-k7scm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 107s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Normal   Starting                 2m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 115s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    114s                 kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     114s                 kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  114s                 kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           110s                 node-controller  Node no-preload-170467 event: Registered Node no-preload-170467 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-170467 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)    kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)    kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)    kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node no-preload-170467 event: Registered Node no-preload-170467 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a32e2d3237a2af02c8bb26acabd5b253db72f624e204b7da7d0f30cd2b961eda] <==
	{"level":"warn","ts":"2025-11-01T10:36:23.906198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:23.949968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:23.976972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:23.992032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.015393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.028272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.043859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.057232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.074131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.106972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.122608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.137082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.158794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.166691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.182108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.198397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.211382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.232755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.247340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.261625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.270257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.306180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.320819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.336887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.415873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:19 up  2:19,  0 user,  load average: 3.84, 4.22, 3.26
	Linux no-preload-170467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d334976880a328aa72139d2bd78a22dd5ca66a3c58c97147961c3a55f5dfdb7] <==
	I1101 10:36:26.434161       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:36:26.436104       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:36:26.436302       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:36:26.436343       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:36:26.436383       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:36:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:36:26.625901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:36:26.626012       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:36:26.626086       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:36:26.626894       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:36:56.626874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:36:56.627081       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:36:56.627157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:36:56.627257       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:36:58.226733       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:36:58.226772       1 metrics.go:72] Registering metrics
	I1101 10:36:58.226830       1 controller.go:711] "Syncing nftables rules"
	I1101 10:37:06.628074       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:37:06.628932       1 main.go:301] handling current node
	I1101 10:37:16.628527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:37:16.628601       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dfce63142ccedebc3c9346d9e3d23366f79ba77d408a006db59c49b63f4fc7c0] <==
	I1101 10:36:25.389130       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:36:25.389329       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:36:25.389372       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:36:25.399939       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:36:25.404392       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:36:25.404674       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:36:25.404717       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:36:25.423924       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:36:25.429179       1 policy_source.go:240] refreshing policies
	I1101 10:36:25.429308       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:36:25.430362       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:36:25.431074       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:36:25.441772       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:36:25.490327       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:36:25.864615       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:36:25.946192       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:36:25.946815       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:36:26.069712       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:36:26.159874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:36:26.222240       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:36:26.438680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.150.211"}
	I1101 10:36:26.462090       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.178.148"}
	I1101 10:36:29.163463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:36:29.261923       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:36:29.362470       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [92666d4844f6b3588b8743cdd07e1886645c89486d34a6c9f834dbddcf36cca7] <==
	I1101 10:36:28.808658       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:36:28.808850       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:36:28.808926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:36:28.809036       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:36:28.809081       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:36:28.808938       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:36:28.809438       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:36:28.809826       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:36:28.816364       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:36:28.816515       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:36:28.816529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:36:28.817272       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:36:28.817456       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:36:28.817935       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-170467"
	I1101 10:36:28.818033       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:36:28.821890       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:36:28.822819       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:36:28.839535       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:36:28.842659       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:36:28.844859       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:36:28.852299       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:36:28.852491       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:36:28.852831       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:36:28.853370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:36:28.867845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [aa0cefe2b636bf67720efda3df850d0b038d67c5882db88b2275ba2af1d5ad01] <==
	I1101 10:36:26.527157       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:36:26.603668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:36:26.704938       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:36:26.705055       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:36:26.705149       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:36:26.724833       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:36:26.724952       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:36:26.730930       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:36:26.731331       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:36:26.731543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:26.732819       1 config.go:200] "Starting service config controller"
	I1101 10:36:26.732879       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:36:26.732921       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:36:26.732947       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:36:26.732987       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:36:26.733018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:36:26.734184       1 config.go:309] "Starting node config controller"
	I1101 10:36:26.734242       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:36:26.734272       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:36:26.833892       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:36:26.833895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:36:26.833990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [463cb4a73c0ec75555794f8ae2b5327835e1820527eae4f732bfe7662c895e04] <==
	I1101 10:36:22.234655       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:36:25.110263       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:36:25.113814       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:36:25.113925       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:36:25.113960       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:36:25.308601       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:36:25.308718       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:25.316294       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:36:25.316430       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:25.316459       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:25.316486       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:36:25.422093       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: I1101 10:36:29.580893     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/45decb05-6dbb-415c-98f8-ce914dcd1b97-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-674q6\" (UID: \"45decb05-6dbb-415c-98f8-ce914dcd1b97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6"
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: I1101 10:36:29.580939     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chnpk\" (UniqueName: \"kubernetes.io/projected/45decb05-6dbb-415c-98f8-ce914dcd1b97-kube-api-access-chnpk\") pod \"dashboard-metrics-scraper-6ffb444bf9-674q6\" (UID: \"45decb05-6dbb-415c-98f8-ce914dcd1b97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6"
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: I1101 10:36:29.580967     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3881c3b-3785-428f-b5cc-cb419961b2a2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-k7scm\" (UID: \"f3881c3b-3785-428f-b5cc-cb419961b2a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k7scm"
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: W1101 10:36:29.799062     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-c5508de8aef321561b38d858be063c6c900ea61b5f09baddb3503b4fbbd9828b WatchSource:0}: Error finding container c5508de8aef321561b38d858be063c6c900ea61b5f09baddb3503b4fbbd9828b: Status 404 returned error can't find the container with id c5508de8aef321561b38d858be063c6c900ea61b5f09baddb3503b4fbbd9828b
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: W1101 10:36:29.812053     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02 WatchSource:0}: Error finding container dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02: Status 404 returned error can't find the container with id dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02
	Nov 01 10:36:31 no-preload-170467 kubelet[766]: I1101 10:36:31.334576     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:36:43 no-preload-170467 kubelet[766]: I1101 10:36:43.008235     766 scope.go:117] "RemoveContainer" containerID="9eb8af57102862ac9e02dabc4cfecd36c26931772400d962b92566852ce5cf62"
	Nov 01 10:36:43 no-preload-170467 kubelet[766]: I1101 10:36:43.047156     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k7scm" podStartSLOduration=8.340925264 podStartE2EDuration="14.047140001s" podCreationTimestamp="2025-11-01 10:36:29 +0000 UTC" firstStartedPulling="2025-11-01 10:36:29.802129136 +0000 UTC m=+11.232470605" lastFinishedPulling="2025-11-01 10:36:35.508343872 +0000 UTC m=+16.938685342" observedRunningTime="2025-11-01 10:36:36.032041269 +0000 UTC m=+17.462382739" watchObservedRunningTime="2025-11-01 10:36:43.047140001 +0000 UTC m=+24.477481471"
	Nov 01 10:36:44 no-preload-170467 kubelet[766]: I1101 10:36:44.012694     766 scope.go:117] "RemoveContainer" containerID="9eb8af57102862ac9e02dabc4cfecd36c26931772400d962b92566852ce5cf62"
	Nov 01 10:36:44 no-preload-170467 kubelet[766]: I1101 10:36:44.014519     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:36:44 no-preload-170467 kubelet[766]: E1101 10:36:44.015526     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:36:45 no-preload-170467 kubelet[766]: I1101 10:36:45.017224     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:36:45 no-preload-170467 kubelet[766]: E1101 10:36:45.017402     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:36:49 no-preload-170467 kubelet[766]: I1101 10:36:49.768641     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:36:49 no-preload-170467 kubelet[766]: E1101 10:36:49.769273     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:36:57 no-preload-170467 kubelet[766]: I1101 10:36:57.049468     766 scope.go:117] "RemoveContainer" containerID="e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862"
	Nov 01 10:37:00 no-preload-170467 kubelet[766]: I1101 10:37:00.903264     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:37:01 no-preload-170467 kubelet[766]: I1101 10:37:01.091574     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:37:01 no-preload-170467 kubelet[766]: I1101 10:37:01.092134     766 scope.go:117] "RemoveContainer" containerID="e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	Nov 01 10:37:01 no-preload-170467 kubelet[766]: E1101 10:37:01.092760     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:37:09 no-preload-170467 kubelet[766]: I1101 10:37:09.769079     766 scope.go:117] "RemoveContainer" containerID="e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	Nov 01 10:37:09 no-preload-170467 kubelet[766]: E1101 10:37:09.769783     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:37:16 no-preload-170467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:37:16 no-preload-170467 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:37:16 no-preload-170467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4978e3acc12ae303ca549d64a786644a09b443cca018b949c9ec3b02ef2b8b0b] <==
	2025/11/01 10:36:35 Starting overwatch
	2025/11/01 10:36:35 Using namespace: kubernetes-dashboard
	2025/11/01 10:36:35 Using in-cluster config to connect to apiserver
	2025/11/01 10:36:35 Using secret token for csrf signing
	2025/11/01 10:36:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:36:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:36:35 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:36:35 Generating JWE encryption key
	2025/11/01 10:36:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:36:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:36:36 Initializing JWE encryption key from synchronized object
	2025/11/01 10:36:36 Creating in-cluster Sidecar client
	2025/11/01 10:36:36 Serving insecurely on HTTP port: 9090
	2025/11/01 10:36:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:37:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ae42d8e4ab16080f670c8ff2b53493af12b192a15fe23571a2dd1102d8b6c641] <==
	I1101 10:36:57.126400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:36:57.157055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:36:57.157568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:36:57.162806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:00.619374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:04.880074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:08.478551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:11.531858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:14.553872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:14.560979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:14.561131       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:37:14.561211       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43dd1037-8540-457f-804d-2dae616429c5", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-170467_ff9f9092-c0f2-4bb8-bf31-32e627cc0ed6 became leader
	I1101 10:37:14.561303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-170467_ff9f9092-c0f2-4bb8-bf31-32e627cc0ed6!
	W1101 10:37:14.567717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:14.573360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:14.661899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-170467_ff9f9092-c0f2-4bb8-bf31-32e627cc0ed6!
	W1101 10:37:16.577265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:16.581964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:18.585616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:18.590431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862] <==
	I1101 10:36:26.331383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:36:56.333347       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-170467 -n no-preload-170467
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-170467 -n no-preload-170467: exit status 2 (394.396469ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-170467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-170467
helpers_test.go:243: (dbg) docker inspect no-preload-170467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174",
	        "Created": "2025-11-01T10:34:34.605945811Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471345,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:36:10.716147667Z",
	            "FinishedAt": "2025-11-01T10:36:09.841873992Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/hostname",
	        "HostsPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/hosts",
	        "LogPath": "/var/lib/docker/containers/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174-json.log",
	        "Name": "/no-preload-170467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-170467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-170467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174",
	                "LowerDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c29291322727ebe821d2c5947f16527d8ef4b50b72fdcf429e6ed2be9a2b47bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-170467",
	                "Source": "/var/lib/docker/volumes/no-preload-170467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-170467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-170467",
	                "name.minikube.sigs.k8s.io": "no-preload-170467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "680dbf312615a8da1d02c8a3bb317a19977cc836bd6d2ab4e37fc4d486ee6114",
	            "SandboxKey": "/var/run/docker/netns/680dbf312615",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-170467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:3d:90:fa:8c:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a76db7f9c768e30abf0f10f25f36c5fa2518f946ae0f8436a94ea13f0365a6d0",
	                    "EndpointID": "09ad27a8e9d90d5c9b5ddcb7b1fcb405a06aabe1b0960ac7012fe9a343b2d6f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-170467",
	                        "496a258eae10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467: exit status 2 (381.934997ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-170467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-170467 logs -n 25: (1.311110544s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-082900 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-082900    │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ delete  │ -p cert-options-082900                                                                                                                                                                                                                        │ cert-options-082900    │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	│ stop    │ -p old-k8s-version-180313 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318 │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070     │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467      │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:36:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:36:30.935812  473779 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:36:30.936457  473779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:30.936489  473779 out.go:374] Setting ErrFile to fd 2...
	I1101 10:36:30.936507  473779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:30.936791  473779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:36:30.937187  473779 out.go:368] Setting JSON to false
	I1101 10:36:30.938290  473779 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8340,"bootTime":1761985051,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:36:30.938385  473779 start.go:143] virtualization:  
	I1101 10:36:30.943616  473779 out.go:179] * [embed-certs-618070] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:36:30.946764  473779 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:36:30.946920  473779 notify.go:221] Checking for updates...
	I1101 10:36:30.953056  473779 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:36:30.956062  473779 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:30.958994  473779 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:36:30.961861  473779 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:36:30.964783  473779 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:36:30.968330  473779 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:30.968936  473779 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:36:30.998419  473779 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:36:30.998568  473779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:31.138730  473779 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:36:31.126505613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:36:31.138841  473779 docker.go:319] overlay module found
	I1101 10:36:31.142052  473779 out.go:179] * Using the docker driver based on existing profile
	I1101 10:36:31.144919  473779 start.go:309] selected driver: docker
	I1101 10:36:31.144944  473779 start.go:930] validating driver "docker" against &{Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:31.145048  473779 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:36:31.145857  473779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:31.241635  473779 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:36:31.231725628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:36:31.242016  473779 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:36:31.242043  473779 cni.go:84] Creating CNI manager for ""
	I1101 10:36:31.242117  473779 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:31.242165  473779 start.go:353] cluster config:
	{Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:31.248080  473779 out.go:179] * Starting "embed-certs-618070" primary control-plane node in "embed-certs-618070" cluster
	I1101 10:36:31.250957  473779 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:36:31.254123  473779 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:36:31.257152  473779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:31.257215  473779 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:36:31.257231  473779 cache.go:59] Caching tarball of preloaded images
	I1101 10:36:31.257336  473779 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:36:31.257353  473779 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:36:31.257457  473779 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json ...
	I1101 10:36:31.257805  473779 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:36:31.286738  473779 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:36:31.286765  473779 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:36:31.286790  473779 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:36:31.286812  473779 start.go:360] acquireMachinesLock for embed-certs-618070: {Name:mk13307b6a73c01f486aea48ffd4761ad677dd7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:31.286868  473779 start.go:364] duration metric: took 33.929µs to acquireMachinesLock for "embed-certs-618070"
	I1101 10:36:31.286892  473779 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:36:31.286898  473779 fix.go:54] fixHost starting: 
	I1101 10:36:31.287146  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:31.306935  473779 fix.go:112] recreateIfNeeded on embed-certs-618070: state=Stopped err=<nil>
	W1101 10:36:31.306990  473779 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:36:31.043213  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:33.537161  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:31.310314  473779 out.go:252] * Restarting existing docker container for "embed-certs-618070" ...
	I1101 10:36:31.310394  473779 cli_runner.go:164] Run: docker start embed-certs-618070
	I1101 10:36:31.662453  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:31.682390  473779 kic.go:430] container "embed-certs-618070" state is running.
	I1101 10:36:31.684207  473779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:36:31.712925  473779 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/config.json ...
	I1101 10:36:31.713152  473779 machine.go:94] provisionDockerMachine start ...
	I1101 10:36:31.713259  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:31.745489  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:31.745893  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:31.745907  473779 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:36:31.746761  473779 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33482->127.0.0.1:33435: read: connection reset by peer
	I1101 10:36:34.923346  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-618070
	
	I1101 10:36:34.923427  473779 ubuntu.go:182] provisioning hostname "embed-certs-618070"
	I1101 10:36:34.923524  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:34.948949  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:34.949257  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:34.949277  473779 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-618070 && echo "embed-certs-618070" | sudo tee /etc/hostname
	I1101 10:36:35.131904  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-618070
	
	I1101 10:36:35.132098  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:35.159298  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:35.159617  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:35.159639  473779 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-618070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-618070/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-618070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:36:35.331378  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:36:35.331461  473779 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:36:35.331506  473779 ubuntu.go:190] setting up certificates
	I1101 10:36:35.331558  473779 provision.go:84] configureAuth start
	I1101 10:36:35.331653  473779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:36:35.353626  473779 provision.go:143] copyHostCerts
	I1101 10:36:35.353724  473779 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:36:35.353741  473779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:36:35.353818  473779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:36:35.353918  473779 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:36:35.353923  473779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:36:35.353948  473779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:36:35.353996  473779 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:36:35.354001  473779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:36:35.354030  473779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:36:35.354074  473779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.embed-certs-618070 san=[127.0.0.1 192.168.85.2 embed-certs-618070 localhost minikube]
	I1101 10:36:35.476490  473779 provision.go:177] copyRemoteCerts
	I1101 10:36:35.476609  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:36:35.476685  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:35.496953  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:35.616752  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:36:35.643122  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:36:35.668409  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:36:35.695882  473779 provision.go:87] duration metric: took 364.287717ms to configureAuth
	I1101 10:36:35.695913  473779 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:36:35.696112  473779 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:35.696236  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:35.735545  473779 main.go:143] libmachine: Using SSH client type: native
	I1101 10:36:35.735856  473779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1101 10:36:35.735871  473779 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:36:36.229037  473779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:36:36.229073  473779 machine.go:97] duration metric: took 4.515909315s to provisionDockerMachine
	I1101 10:36:36.229085  473779 start.go:293] postStartSetup for "embed-certs-618070" (driver="docker")
	I1101 10:36:36.229096  473779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:36:36.229165  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:36:36.229261  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.255696  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.384085  473779 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:36:36.389336  473779 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:36:36.389367  473779 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:36:36.389378  473779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:36:36.389428  473779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:36:36.389515  473779 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:36:36.389623  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:36:36.404740  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:36:36.436077  473779 start.go:296] duration metric: took 206.976627ms for postStartSetup
	I1101 10:36:36.436164  473779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:36:36.436215  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.456938  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.563457  473779 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:36:36.568448  473779 fix.go:56] duration metric: took 5.281542706s for fixHost
	I1101 10:36:36.568475  473779 start.go:83] releasing machines lock for "embed-certs-618070", held for 5.281593029s
	I1101 10:36:36.568562  473779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-618070
	I1101 10:36:36.589421  473779 ssh_runner.go:195] Run: cat /version.json
	I1101 10:36:36.589485  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.589738  473779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:36:36.589794  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:36.629764  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.632447  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:36.754380  473779 ssh_runner.go:195] Run: systemctl --version
	I1101 10:36:36.865051  473779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:36:36.937644  473779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:36:36.942784  473779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:36:36.942899  473779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:36:36.952381  473779 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:36:36.952455  473779 start.go:496] detecting cgroup driver to use...
	I1101 10:36:36.952503  473779 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:36:36.952585  473779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:36:36.970811  473779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:36:36.985202  473779 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:36:36.985322  473779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:36:37.003611  473779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:36:37.025138  473779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:36:37.192365  473779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:36:37.348065  473779 docker.go:234] disabling docker service ...
	I1101 10:36:37.348210  473779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:36:37.366325  473779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:36:37.381322  473779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:36:37.558866  473779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:36:37.752262  473779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:36:37.767628  473779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:36:37.791049  473779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:36:37.791190  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.803022  473779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:36:37.803145  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.812833  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.822181  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.831390  473779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:36:37.840564  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.851123  473779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.862982  473779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:36:37.879042  473779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:36:37.894326  473779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:36:37.903295  473779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:36:38.116393  473779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:36:38.593018  473779 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:36:38.593140  473779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:36:38.604544  473779 start.go:564] Will wait 60s for crictl version
	I1101 10:36:38.604691  473779 ssh_runner.go:195] Run: which crictl
	I1101 10:36:38.609154  473779 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:36:38.656838  473779 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:36:38.657023  473779 ssh_runner.go:195] Run: crio --version
	I1101 10:36:38.693032  473779 ssh_runner.go:195] Run: crio --version
	I1101 10:36:38.739264  473779 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 10:36:35.538879  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:37.538939  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:39.540444  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:38.742573  473779 cli_runner.go:164] Run: docker network inspect embed-certs-618070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:36:38.768457  473779 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:36:38.774714  473779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:36:38.790994  473779 kubeadm.go:884] updating cluster {Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:36:38.791117  473779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:38.791168  473779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:36:38.848046  473779 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:36:38.848071  473779 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:36:38.848128  473779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:36:38.876333  473779 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:36:38.876358  473779 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:36:38.876366  473779 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:36:38.876458  473779 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-618070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:36:38.876550  473779 ssh_runner.go:195] Run: crio config
	I1101 10:36:38.965966  473779 cni.go:84] Creating CNI manager for ""
	I1101 10:36:38.965996  473779 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:38.966016  473779 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:36:38.966039  473779 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-618070 NodeName:embed-certs-618070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:36:38.966186  473779 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-618070"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:36:38.966255  473779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:36:38.974830  473779 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:36:38.974912  473779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:36:38.983141  473779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 10:36:38.996714  473779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:36:39.012743  473779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 10:36:39.027468  473779 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:36:39.031409  473779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:36:39.041837  473779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:36:39.206028  473779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:36:39.233345  473779 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070 for IP: 192.168.85.2
	I1101 10:36:39.233368  473779 certs.go:195] generating shared ca certs ...
	I1101 10:36:39.233392  473779 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:39.233535  473779 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:36:39.233579  473779 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:36:39.233589  473779 certs.go:257] generating profile certs ...
	I1101 10:36:39.233682  473779 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/client.key
	I1101 10:36:39.233770  473779 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key.eb801fed
	I1101 10:36:39.233818  473779 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key
	I1101 10:36:39.233923  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:36:39.233957  473779 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:36:39.233970  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:36:39.234010  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:36:39.234037  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:36:39.234064  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:36:39.234123  473779 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:36:39.234716  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:36:39.273976  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:36:39.337823  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:36:39.398371  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:36:39.475662  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:36:39.531819  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:36:39.570423  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:36:39.595646  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/embed-certs-618070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:36:39.617688  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:36:39.641031  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:36:39.673206  473779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:36:39.694917  473779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:36:39.711364  473779 ssh_runner.go:195] Run: openssl version
	I1101 10:36:39.719690  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:36:39.731156  473779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:36:39.735561  473779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:36:39.735631  473779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:36:39.784039  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:36:39.793495  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:36:39.803049  473779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:36:39.807415  473779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:36:39.807483  473779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:36:39.859105  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:36:39.868210  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:36:39.878274  473779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:36:39.882944  473779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:36:39.883019  473779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:36:39.934719  473779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:36:39.946412  473779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:36:39.970115  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:36:40.076826  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:36:40.163714  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:36:40.252469  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:36:40.467029  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:36:40.635681  473779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:36:40.724484  473779 kubeadm.go:401] StartCluster: {Name:embed-certs-618070 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-618070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:40.724570  473779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:36:40.724641  473779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:36:40.782646  473779 cri.go:89] found id: "847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764"
	I1101 10:36:40.782669  473779 cri.go:89] found id: "86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f"
	I1101 10:36:40.782674  473779 cri.go:89] found id: "0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1"
	I1101 10:36:40.782687  473779 cri.go:89] found id: "c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff"
	I1101 10:36:40.782691  473779 cri.go:89] found id: ""
	I1101 10:36:40.782742  473779 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:36:40.821134  473779 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:40Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:36:40.821232  473779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:36:40.836971  473779 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:36:40.836991  473779 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:36:40.837043  473779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:36:40.852971  473779 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:36:40.853580  473779 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-618070" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:40.853867  473779 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-618070" cluster setting kubeconfig missing "embed-certs-618070" context setting]
	I1101 10:36:40.854316  473779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:40.855770  473779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:36:40.875689  473779 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:36:40.875723  473779 kubeadm.go:602] duration metric: took 38.726968ms to restartPrimaryControlPlane
	I1101 10:36:40.875733  473779 kubeadm.go:403] duration metric: took 151.258632ms to StartCluster
	I1101 10:36:40.875748  473779 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:40.875805  473779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:36:40.877044  473779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:40.877258  473779 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:36:40.877540  473779 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:40.877582  473779 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:36:40.877647  473779 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-618070"
	I1101 10:36:40.877661  473779 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-618070"
	W1101 10:36:40.877672  473779 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:36:40.877712  473779 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:36:40.877990  473779 addons.go:70] Setting dashboard=true in profile "embed-certs-618070"
	I1101 10:36:40.878013  473779 addons.go:239] Setting addon dashboard=true in "embed-certs-618070"
	W1101 10:36:40.878021  473779 addons.go:248] addon dashboard should already be in state true
	I1101 10:36:40.878040  473779 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:36:40.878542  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.878955  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.879260  473779 addons.go:70] Setting default-storageclass=true in profile "embed-certs-618070"
	I1101 10:36:40.879299  473779 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-618070"
	I1101 10:36:40.879598  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.883937  473779 out.go:179] * Verifying Kubernetes components...
	I1101 10:36:40.891598  473779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:36:40.965445  473779 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:36:40.968499  473779 addons.go:239] Setting addon default-storageclass=true in "embed-certs-618070"
	W1101 10:36:40.968531  473779 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:36:40.968556  473779 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:36:40.968936  473779 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:36:40.968951  473779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:36:40.969019  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:40.969559  473779 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:36:40.987700  473779 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:36:40.990842  473779 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1101 10:36:42.042172  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:44.055406  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:40.994251  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:36:40.994280  473779 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:36:40.994371  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:41.000971  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:41.029992  473779 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:36:41.030029  473779 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:36:41.030096  473779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:36:41.053280  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:41.077685  473779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:36:41.327282  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:36:41.327360  473779 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:36:41.399173  473779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:36:41.472383  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:36:41.472455  473779 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:36:41.478962  473779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:36:41.508030  473779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:36:41.551707  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:36:41.551784  473779 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:36:41.680976  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:36:41.680996  473779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:36:41.811972  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:36:41.811993  473779 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:36:41.882410  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:36:41.882432  473779 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:36:41.904703  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:36:41.904771  473779 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:36:41.932688  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:36:41.932762  473779 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:36:41.958745  473779 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:36:41.958820  473779 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:36:42.005504  473779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:36:47.081438  473779 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.602367909s)
	I1101 10:36:47.081485  473779 node_ready.go:35] waiting up to 6m0s for node "embed-certs-618070" to be "Ready" ...
	I1101 10:36:47.081840  473779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.682589717s)
	I1101 10:36:47.165492  473779 node_ready.go:49] node "embed-certs-618070" is "Ready"
	I1101 10:36:47.165525  473779 node_ready.go:38] duration metric: took 84.011752ms for node "embed-certs-618070" to be "Ready" ...
	I1101 10:36:47.165548  473779 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:36:47.165604  473779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:36:48.316090  473779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.807985751s)
	I1101 10:36:48.316214  473779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.310624026s)
	I1101 10:36:48.316414  473779 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.150788924s)
	I1101 10:36:48.316437  473779 api_server.go:72] duration metric: took 7.439148634s to wait for apiserver process to appear ...
	I1101 10:36:48.316464  473779 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:36:48.316487  473779 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:48.319423  473779 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-618070 addons enable metrics-server
	
	I1101 10:36:48.322428  473779 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1101 10:36:46.538107  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:49.037214  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	I1101 10:36:48.325481  473779 addons.go:515] duration metric: took 7.44788004s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 10:36:48.331364  473779 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:36:48.331409  473779 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:36:48.816602  473779 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:48.833244  473779 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:36:48.834294  473779 api_server.go:141] control plane version: v1.34.1
	I1101 10:36:48.834363  473779 api_server.go:131] duration metric: took 517.88637ms to wait for apiserver health ...
	I1101 10:36:48.834388  473779 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:36:48.837753  473779 system_pods.go:59] 8 kube-system pods found
	I1101 10:36:48.837840  473779 system_pods.go:61] "coredns-66bc5c9577-6rf8b" [3db12ed9-30d4-45e6-9c67-8fa581fe4652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:36:48.837866  473779 system_pods.go:61] "etcd-embed-certs-618070" [90e1511c-e9c4-4687-bd18-42a6032ca610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:36:48.837905  473779 system_pods.go:61] "kindnet-df7sw" [268ab883-4df6-47bd-8d25-523991f7a2d0] Running
	I1101 10:36:48.837931  473779 system_pods.go:61] "kube-apiserver-embed-certs-618070" [1be29177-a4d5-4272-a85a-a241133bf93d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:36:48.837955  473779 system_pods.go:61] "kube-controller-manager-embed-certs-618070" [3bb05f71-abcf-464e-8c7b-7e2d09df97aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:36:48.837993  473779 system_pods.go:61] "kube-proxy-8lcjb" [d2f8c1d5-fad6-4e84-af61-5152f65cf2bb] Running
	I1101 10:36:48.838025  473779 system_pods.go:61] "kube-scheduler-embed-certs-618070" [25897b84-2d6e-4bcd-adff-1d385013f52f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:36:48.838046  473779 system_pods.go:61] "storage-provisioner" [31534ded-cd9f-410b-abe9-f1992dd225bc] Running
	I1101 10:36:48.838080  473779 system_pods.go:74] duration metric: took 3.671622ms to wait for pod list to return data ...
	I1101 10:36:48.838107  473779 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:36:48.840619  473779 default_sa.go:45] found service account: "default"
	I1101 10:36:48.840682  473779 default_sa.go:55] duration metric: took 2.55187ms for default service account to be created ...
	I1101 10:36:48.840708  473779 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:36:48.845336  473779 system_pods.go:86] 8 kube-system pods found
	I1101 10:36:48.845375  473779 system_pods.go:89] "coredns-66bc5c9577-6rf8b" [3db12ed9-30d4-45e6-9c67-8fa581fe4652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:36:48.845386  473779 system_pods.go:89] "etcd-embed-certs-618070" [90e1511c-e9c4-4687-bd18-42a6032ca610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:36:48.845392  473779 system_pods.go:89] "kindnet-df7sw" [268ab883-4df6-47bd-8d25-523991f7a2d0] Running
	I1101 10:36:48.845401  473779 system_pods.go:89] "kube-apiserver-embed-certs-618070" [1be29177-a4d5-4272-a85a-a241133bf93d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:36:48.845407  473779 system_pods.go:89] "kube-controller-manager-embed-certs-618070" [3bb05f71-abcf-464e-8c7b-7e2d09df97aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:36:48.845412  473779 system_pods.go:89] "kube-proxy-8lcjb" [d2f8c1d5-fad6-4e84-af61-5152f65cf2bb] Running
	I1101 10:36:48.845419  473779 system_pods.go:89] "kube-scheduler-embed-certs-618070" [25897b84-2d6e-4bcd-adff-1d385013f52f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:36:48.845423  473779 system_pods.go:89] "storage-provisioner" [31534ded-cd9f-410b-abe9-f1992dd225bc] Running
	I1101 10:36:48.845435  473779 system_pods.go:126] duration metric: took 4.714637ms to wait for k8s-apps to be running ...
	I1101 10:36:48.845448  473779 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:36:48.845504  473779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:36:48.865285  473779 system_svc.go:56] duration metric: took 19.827076ms WaitForService to wait for kubelet
	I1101 10:36:48.865316  473779 kubeadm.go:587] duration metric: took 7.988026371s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:36:48.865335  473779 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:36:48.872136  473779 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:36:48.872172  473779 node_conditions.go:123] node cpu capacity is 2
	I1101 10:36:48.872186  473779 node_conditions.go:105] duration metric: took 6.844739ms to run NodePressure ...
	I1101 10:36:48.872198  473779 start.go:242] waiting for startup goroutines ...
	I1101 10:36:48.872206  473779 start.go:247] waiting for cluster config update ...
	I1101 10:36:48.872221  473779 start.go:256] writing updated cluster config ...
	I1101 10:36:48.872529  473779 ssh_runner.go:195] Run: rm -f paused
	I1101 10:36:48.877184  473779 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:36:48.880892  473779 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rf8b" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:36:50.891713  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:36:51.536676  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:53.537349  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:53.387188  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:36:55.888087  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:36:56.036664  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:58.039549  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:37:00.091096  471219 pod_ready.go:104] pod "coredns-66bc5c9577-f8tc4" is not "Ready", error: <nil>
	W1101 10:36:58.386884  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:00.391377  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	I1101 10:37:01.538532  471219 pod_ready.go:94] pod "coredns-66bc5c9577-f8tc4" is "Ready"
	I1101 10:37:01.538561  471219 pod_ready.go:86] duration metric: took 34.507167842s for pod "coredns-66bc5c9577-f8tc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.541262  471219 pod_ready.go:83] waiting for pod "etcd-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.545957  471219 pod_ready.go:94] pod "etcd-no-preload-170467" is "Ready"
	I1101 10:37:01.545985  471219 pod_ready.go:86] duration metric: took 4.699293ms for pod "etcd-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.548104  471219 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.552352  471219 pod_ready.go:94] pod "kube-apiserver-no-preload-170467" is "Ready"
	I1101 10:37:01.552425  471219 pod_ready.go:86] duration metric: took 4.293648ms for pod "kube-apiserver-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.554528  471219 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.734829  471219 pod_ready.go:94] pod "kube-controller-manager-no-preload-170467" is "Ready"
	I1101 10:37:01.734912  471219 pod_ready.go:86] duration metric: took 180.360841ms for pod "kube-controller-manager-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:01.934947  471219 pod_ready.go:83] waiting for pod "kube-proxy-8fvnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.335047  471219 pod_ready.go:94] pod "kube-proxy-8fvnf" is "Ready"
	I1101 10:37:02.335074  471219 pod_ready.go:86] duration metric: took 400.098246ms for pod "kube-proxy-8fvnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.535140  471219 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.935373  471219 pod_ready.go:94] pod "kube-scheduler-no-preload-170467" is "Ready"
	I1101 10:37:02.935402  471219 pod_ready.go:86] duration metric: took 400.22894ms for pod "kube-scheduler-no-preload-170467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:02.935415  471219 pod_ready.go:40] duration metric: took 35.908808363s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:37:02.991550  471219 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:37:02.994737  471219 out.go:179] * Done! kubectl is now configured to use "no-preload-170467" cluster and "default" namespace by default
	W1101 10:37:02.894232  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:05.386836  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:07.886128  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:09.886520  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:12.387626  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:14.887183  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.906917928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.925490979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.930324215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.956293924Z" level=info msg="Created container e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6/dashboard-metrics-scraper" id=2ace25d1-18e4-444d-9a10-09c1d1a6408d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.957930871Z" level=info msg="Starting container: e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd" id=5672d89a-eee1-4746-a19e-361dc2b7b3fe name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:37:00 no-preload-170467 crio[649]: time="2025-11-01T10:37:00.964280846Z" level=info msg="Started container" PID=1626 containerID=e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6/dashboard-metrics-scraper id=5672d89a-eee1-4746-a19e-361dc2b7b3fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02
	Nov 01 10:37:00 no-preload-170467 conmon[1624]: conmon e8b63b5e9f8d37ab01b3 <ninfo>: container 1626 exited with status 1
	Nov 01 10:37:01 no-preload-170467 crio[649]: time="2025-11-01T10:37:01.103694855Z" level=info msg="Removing container: d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26" id=aaee3017-a02f-4f71-a4ec-7dadd523d075 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:01 no-preload-170467 crio[649]: time="2025-11-01T10:37:01.117178033Z" level=info msg="Error loading conmon cgroup of container d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26: cgroup deleted" id=aaee3017-a02f-4f71-a4ec-7dadd523d075 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:01 no-preload-170467 crio[649]: time="2025-11-01T10:37:01.127188913Z" level=info msg="Removed container d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6/dashboard-metrics-scraper" id=aaee3017-a02f-4f71-a4ec-7dadd523d075 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.629220297Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.634082168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.634116852Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.63413899Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.637386135Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.637421319Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.637446419Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.640641181Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.640674733Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.640697174Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.643839283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.643872801Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.643894537Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.64684296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:06 no-preload-170467 crio[649]: time="2025-11-01T10:37:06.646876996Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e8b63b5e9f8d3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   dc08ffbfdfc36       dashboard-metrics-scraper-6ffb444bf9-674q6   kubernetes-dashboard
	ae42d8e4ab160       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   cd277f5c86ba5       storage-provisioner                          kube-system
	4978e3acc12ae       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   c5508de8aef32       kubernetes-dashboard-855c9754f9-k7scm        kubernetes-dashboard
	3c03303a31853       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   ed4b8d7cd7c4f       busybox                                      default
	2542d184d846f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   dfc59fc21faca       coredns-66bc5c9577-f8tc4                     kube-system
	aa0cefe2b636b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   3979057d870f6       kube-proxy-8fvnf                             kube-system
	2d334976880a3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   b5181abb56843       kindnet-5n4vx                                kube-system
	e34fa2d2c95db       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   cd277f5c86ba5       storage-provisioner                          kube-system
	92666d4844f6b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   97b54eca601d7       kube-controller-manager-no-preload-170467    kube-system
	463cb4a73c0ec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ec38191257046       kube-scheduler-no-preload-170467             kube-system
	a32e2d3237a2a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9df1dc9f8d77a       etcd-no-preload-170467                       kube-system
	dfce63142cced       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3c339fedaf02c       kube-apiserver-no-preload-170467             kube-system
	
	
	==> coredns [2542d184d846f8559dc4739455bb2da603e70043dd6f539aa02dd36184e7f96f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45959 - 34490 "HINFO IN 5945262584664602275.6701990702020595875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017766469s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-170467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-170467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=no-preload-170467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-170467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:37:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:36:56 +0000   Sat, 01 Nov 2025 10:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-170467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a96dd2dd-60b3-4301-a26e-0deb5b7ad5c7
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-f8tc4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-170467                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-5n4vx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-170467              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-170467     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-8fvnf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-170467              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-674q6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-k7scm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 110s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   Starting                 2m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           112s                 node-controller  Node no-preload-170467 event: Registered Node no-preload-170467 in Controller
	  Normal   NodeReady                98s                  kubelet          Node no-preload-170467 status is now: NodeReady
	  Normal   Starting                 63s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)    kubelet          Node no-preload-170467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)    kubelet          Node no-preload-170467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)    kubelet          Node no-preload-170467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                  node-controller  Node no-preload-170467 event: Registered Node no-preload-170467 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a32e2d3237a2af02c8bb26acabd5b253db72f624e204b7da7d0f30cd2b961eda] <==
	{"level":"warn","ts":"2025-11-01T10:36:23.906198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:23.949968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:23.976972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:23.992032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.015393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.028272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.043859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.057232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.074131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.106972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.122608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.137082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.158794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.166691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.182108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.198397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.211382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.232755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.247340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.261625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.270257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.306180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.320819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.336887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:24.415873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:21 up  2:19,  0 user,  load average: 3.84, 4.22, 3.26
	Linux no-preload-170467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d334976880a328aa72139d2bd78a22dd5ca66a3c58c97147961c3a55f5dfdb7] <==
	I1101 10:36:26.434161       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:36:26.436104       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:36:26.436302       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:36:26.436343       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:36:26.436383       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:36:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:36:26.625901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:36:26.626012       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:36:26.626086       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:36:26.626894       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:36:56.626874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:36:56.627081       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:36:56.627157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:36:56.627257       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:36:58.226733       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:36:58.226772       1 metrics.go:72] Registering metrics
	I1101 10:36:58.226830       1 controller.go:711] "Syncing nftables rules"
	I1101 10:37:06.628074       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:37:06.628932       1 main.go:301] handling current node
	I1101 10:37:16.628527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:37:16.628601       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dfce63142ccedebc3c9346d9e3d23366f79ba77d408a006db59c49b63f4fc7c0] <==
	I1101 10:36:25.389130       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:36:25.389329       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:36:25.389372       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:36:25.399939       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:36:25.404392       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:36:25.404674       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:36:25.404717       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:36:25.423924       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:36:25.429179       1 policy_source.go:240] refreshing policies
	I1101 10:36:25.429308       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:36:25.430362       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:36:25.431074       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:36:25.441772       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:36:25.490327       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:36:25.864615       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:36:25.946192       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:36:25.946815       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:36:26.069712       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:36:26.159874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:36:26.222240       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:36:26.438680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.150.211"}
	I1101 10:36:26.462090       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.178.148"}
	I1101 10:36:29.163463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:36:29.261923       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:36:29.362470       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [92666d4844f6b3588b8743cdd07e1886645c89486d34a6c9f834dbddcf36cca7] <==
	I1101 10:36:28.808658       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:36:28.808850       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:36:28.808926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:36:28.809036       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:36:28.809081       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:36:28.808938       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:36:28.809438       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:36:28.809826       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:36:28.816364       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:36:28.816515       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:36:28.816529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:36:28.817272       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:36:28.817456       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:36:28.817935       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-170467"
	I1101 10:36:28.818033       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:36:28.821890       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:36:28.822819       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:36:28.839535       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:36:28.842659       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:36:28.844859       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:36:28.852299       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:36:28.852491       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:36:28.852831       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:36:28.853370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:36:28.867845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [aa0cefe2b636bf67720efda3df850d0b038d67c5882db88b2275ba2af1d5ad01] <==
	I1101 10:36:26.527157       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:36:26.603668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:36:26.704938       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:36:26.705055       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:36:26.705149       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:36:26.724833       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:36:26.724952       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:36:26.730930       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:36:26.731331       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:36:26.731543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:26.732819       1 config.go:200] "Starting service config controller"
	I1101 10:36:26.732879       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:36:26.732921       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:36:26.732947       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:36:26.732987       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:36:26.733018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:36:26.734184       1 config.go:309] "Starting node config controller"
	I1101 10:36:26.734242       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:36:26.734272       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:36:26.833892       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:36:26.833895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:36:26.833990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [463cb4a73c0ec75555794f8ae2b5327835e1820527eae4f732bfe7662c895e04] <==
	I1101 10:36:22.234655       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:36:25.110263       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:36:25.113814       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:36:25.113925       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:36:25.113960       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:36:25.308601       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:36:25.308718       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:25.316294       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:36:25.316430       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:25.316459       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:25.316486       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:36:25.422093       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: I1101 10:36:29.580893     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/45decb05-6dbb-415c-98f8-ce914dcd1b97-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-674q6\" (UID: \"45decb05-6dbb-415c-98f8-ce914dcd1b97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6"
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: I1101 10:36:29.580939     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chnpk\" (UniqueName: \"kubernetes.io/projected/45decb05-6dbb-415c-98f8-ce914dcd1b97-kube-api-access-chnpk\") pod \"dashboard-metrics-scraper-6ffb444bf9-674q6\" (UID: \"45decb05-6dbb-415c-98f8-ce914dcd1b97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6"
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: I1101 10:36:29.580967     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3881c3b-3785-428f-b5cc-cb419961b2a2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-k7scm\" (UID: \"f3881c3b-3785-428f-b5cc-cb419961b2a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k7scm"
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: W1101 10:36:29.799062     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-c5508de8aef321561b38d858be063c6c900ea61b5f09baddb3503b4fbbd9828b WatchSource:0}: Error finding container c5508de8aef321561b38d858be063c6c900ea61b5f09baddb3503b4fbbd9828b: Status 404 returned error can't find the container with id c5508de8aef321561b38d858be063c6c900ea61b5f09baddb3503b4fbbd9828b
	Nov 01 10:36:29 no-preload-170467 kubelet[766]: W1101 10:36:29.812053     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/496a258eae1082adf6ecce0c7477bf6deb96531e9317afa44956789ee8d11174/crio-dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02 WatchSource:0}: Error finding container dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02: Status 404 returned error can't find the container with id dc08ffbfdfc36f10c47a15764ebfbc24a30d1a3ba1886ed37aa1667f58b99e02
	Nov 01 10:36:31 no-preload-170467 kubelet[766]: I1101 10:36:31.334576     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:36:43 no-preload-170467 kubelet[766]: I1101 10:36:43.008235     766 scope.go:117] "RemoveContainer" containerID="9eb8af57102862ac9e02dabc4cfecd36c26931772400d962b92566852ce5cf62"
	Nov 01 10:36:43 no-preload-170467 kubelet[766]: I1101 10:36:43.047156     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k7scm" podStartSLOduration=8.340925264 podStartE2EDuration="14.047140001s" podCreationTimestamp="2025-11-01 10:36:29 +0000 UTC" firstStartedPulling="2025-11-01 10:36:29.802129136 +0000 UTC m=+11.232470605" lastFinishedPulling="2025-11-01 10:36:35.508343872 +0000 UTC m=+16.938685342" observedRunningTime="2025-11-01 10:36:36.032041269 +0000 UTC m=+17.462382739" watchObservedRunningTime="2025-11-01 10:36:43.047140001 +0000 UTC m=+24.477481471"
	Nov 01 10:36:44 no-preload-170467 kubelet[766]: I1101 10:36:44.012694     766 scope.go:117] "RemoveContainer" containerID="9eb8af57102862ac9e02dabc4cfecd36c26931772400d962b92566852ce5cf62"
	Nov 01 10:36:44 no-preload-170467 kubelet[766]: I1101 10:36:44.014519     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:36:44 no-preload-170467 kubelet[766]: E1101 10:36:44.015526     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:36:45 no-preload-170467 kubelet[766]: I1101 10:36:45.017224     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:36:45 no-preload-170467 kubelet[766]: E1101 10:36:45.017402     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:36:49 no-preload-170467 kubelet[766]: I1101 10:36:49.768641     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:36:49 no-preload-170467 kubelet[766]: E1101 10:36:49.769273     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:36:57 no-preload-170467 kubelet[766]: I1101 10:36:57.049468     766 scope.go:117] "RemoveContainer" containerID="e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862"
	Nov 01 10:37:00 no-preload-170467 kubelet[766]: I1101 10:37:00.903264     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:37:01 no-preload-170467 kubelet[766]: I1101 10:37:01.091574     766 scope.go:117] "RemoveContainer" containerID="d363306bbc1c66a3f79511542f22d4ad0db6197f55e2b1e21d69e81d2e14ba26"
	Nov 01 10:37:01 no-preload-170467 kubelet[766]: I1101 10:37:01.092134     766 scope.go:117] "RemoveContainer" containerID="e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	Nov 01 10:37:01 no-preload-170467 kubelet[766]: E1101 10:37:01.092760     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:37:09 no-preload-170467 kubelet[766]: I1101 10:37:09.769079     766 scope.go:117] "RemoveContainer" containerID="e8b63b5e9f8d37ab01b34301cb1d7c145d6da3e5a4d98eaf8b38f0e3989fd8bd"
	Nov 01 10:37:09 no-preload-170467 kubelet[766]: E1101 10:37:09.769783     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-674q6_kubernetes-dashboard(45decb05-6dbb-415c-98f8-ce914dcd1b97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-674q6" podUID="45decb05-6dbb-415c-98f8-ce914dcd1b97"
	Nov 01 10:37:16 no-preload-170467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:37:16 no-preload-170467 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:37:16 no-preload-170467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4978e3acc12ae303ca549d64a786644a09b443cca018b949c9ec3b02ef2b8b0b] <==
	2025/11/01 10:36:35 Starting overwatch
	2025/11/01 10:36:35 Using namespace: kubernetes-dashboard
	2025/11/01 10:36:35 Using in-cluster config to connect to apiserver
	2025/11/01 10:36:35 Using secret token for csrf signing
	2025/11/01 10:36:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:36:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:36:35 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:36:35 Generating JWE encryption key
	2025/11/01 10:36:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:36:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:36:36 Initializing JWE encryption key from synchronized object
	2025/11/01 10:36:36 Creating in-cluster Sidecar client
	2025/11/01 10:36:36 Serving insecurely on HTTP port: 9090
	2025/11/01 10:36:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:37:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ae42d8e4ab16080f670c8ff2b53493af12b192a15fe23571a2dd1102d8b6c641] <==
	I1101 10:36:57.126400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:36:57.157055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:36:57.157568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:36:57.162806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:00.619374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:04.880074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:08.478551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:11.531858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:14.553872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:14.560979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:14.561131       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:37:14.561211       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43dd1037-8540-457f-804d-2dae616429c5", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-170467_ff9f9092-c0f2-4bb8-bf31-32e627cc0ed6 became leader
	I1101 10:37:14.561303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-170467_ff9f9092-c0f2-4bb8-bf31-32e627cc0ed6!
	W1101 10:37:14.567717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:14.573360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:14.661899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-170467_ff9f9092-c0f2-4bb8-bf31-32e627cc0ed6!
	W1101 10:37:16.577265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:16.581964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:18.585616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:18.590431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:20.593432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:20.598639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e34fa2d2c95db19a7ddd0638a9f24ddf5abba06508773a2f5a2a7fe781219862] <==
	I1101 10:36:26.331383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:36:56.333347       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
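Two details stand out in the captured logs above: the first storage-provisioner instance (e34fa2d2…) died because it could not reach the API server within its 30s timeout (dial tcp 10.96.0.1:443: i/o timeout), and its replacement (ae42d8e4…) then acquired the kube-system/k8s.io-minikube-hostpath lock through the legacy Endpoints-based leader election, which is why every renewal emits the "v1 Endpoints is deprecated in v1.33+" warning. With that lock type the holder's identity is recorded in an annotation on the Endpoints object. The snippet below is purely illustrative (a client-go sketch assuming a reachable cluster and a kubeconfig at ~/.kube/config; it is not part of the test harness) and shows how to read that record when debugging which provisioner instance currently holds the lease.

    // leaderinfo.go - illustrative only: reads the holder of the Endpoints-based
    // leader-election lock used by the storage provisioner
    // (kube-system/k8s.io-minikube-hostpath). Assumes ~/.kube/config points at
    // the cluster under test.
    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatalf("load kubeconfig: %v", err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatalf("build client: %v", err)
        }
        ep, err := client.CoreV1().Endpoints("kube-system").
            Get(context.Background(), "k8s.io-minikube-hostpath", metav1.GetOptions{})
        if err != nil {
            log.Fatalf("get endpoints: %v", err)
        }
        // The elected provisioner records itself here, e.g.
        // "no-preload-170467_ff9f9092-..." as seen in the event log above.
        fmt.Println(ep.Annotations["control-plane.alpha.kubernetes.io/leader"])
    }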
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-170467 -n no-preload-170467
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-170467 -n no-preload-170467: exit status 2 (381.597915ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-170467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-618070 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-618070 --alsologtostderr -v=1: exit status 80 (2.412138059s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-618070 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:37:43.542138  479596 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:37:43.542258  479596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:43.542263  479596 out.go:374] Setting ErrFile to fd 2...
	I1101 10:37:43.542268  479596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:43.542625  479596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:37:43.542914  479596 out.go:368] Setting JSON to false
	I1101 10:37:43.542933  479596 mustload.go:66] Loading cluster: embed-certs-618070
	I1101 10:37:43.543590  479596 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:43.544256  479596 cli_runner.go:164] Run: docker container inspect embed-certs-618070 --format={{.State.Status}}
	I1101 10:37:43.577450  479596 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:37:43.577808  479596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:43.663010  479596 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 10:37:43.652079909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:43.663639  479596 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-618070 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:37:43.667285  479596 out.go:179] * Pausing node embed-certs-618070 ... 
	I1101 10:37:43.671052  479596 host.go:66] Checking if "embed-certs-618070" exists ...
	I1101 10:37:43.671415  479596 ssh_runner.go:195] Run: systemctl --version
	I1101 10:37:43.671489  479596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-618070
	I1101 10:37:43.691855  479596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/embed-certs-618070/id_rsa Username:docker}
	I1101 10:37:43.796627  479596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:37:43.818437  479596 pause.go:52] kubelet running: true
	I1101 10:37:43.818519  479596 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:37:44.181380  479596 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:37:44.181456  479596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:37:44.283438  479596 cri.go:89] found id: "8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0"
	I1101 10:37:44.283465  479596 cri.go:89] found id: "4ddb4fef3b268154bc9e83ba2858fcb64e6baa4f2a44667a80d4995ab5d913ad"
	I1101 10:37:44.283470  479596 cri.go:89] found id: "d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af"
	I1101 10:37:44.283473  479596 cri.go:89] found id: "cb1254843ac79ef47142f6a8bc6ad54ed6322e797118f62073d8664938dddc43"
	I1101 10:37:44.283477  479596 cri.go:89] found id: "78070053967b8dc393db82612d91df7e0f712db3bfd50b12800aee9e57b0aa66"
	I1101 10:37:44.283481  479596 cri.go:89] found id: "847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764"
	I1101 10:37:44.283484  479596 cri.go:89] found id: "86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f"
	I1101 10:37:44.283487  479596 cri.go:89] found id: "0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1"
	I1101 10:37:44.283490  479596 cri.go:89] found id: "c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff"
	I1101 10:37:44.283496  479596 cri.go:89] found id: "1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	I1101 10:37:44.283500  479596 cri.go:89] found id: "ea77073cf682212cbdff314bf42c52e6de94d41c312dd4240d84ecac9abeb1b9"
	I1101 10:37:44.283502  479596 cri.go:89] found id: ""
	I1101 10:37:44.283549  479596 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:37:44.296995  479596 retry.go:31] will retry after 305.598336ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:44Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:37:44.603566  479596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:37:44.622203  479596 pause.go:52] kubelet running: false
	I1101 10:37:44.622331  479596 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:37:44.828983  479596 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:37:44.829073  479596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:37:44.913863  479596 cri.go:89] found id: "8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0"
	I1101 10:37:44.913883  479596 cri.go:89] found id: "4ddb4fef3b268154bc9e83ba2858fcb64e6baa4f2a44667a80d4995ab5d913ad"
	I1101 10:37:44.913888  479596 cri.go:89] found id: "d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af"
	I1101 10:37:44.913892  479596 cri.go:89] found id: "cb1254843ac79ef47142f6a8bc6ad54ed6322e797118f62073d8664938dddc43"
	I1101 10:37:44.913896  479596 cri.go:89] found id: "78070053967b8dc393db82612d91df7e0f712db3bfd50b12800aee9e57b0aa66"
	I1101 10:37:44.913900  479596 cri.go:89] found id: "847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764"
	I1101 10:37:44.913903  479596 cri.go:89] found id: "86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f"
	I1101 10:37:44.913907  479596 cri.go:89] found id: "0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1"
	I1101 10:37:44.913910  479596 cri.go:89] found id: "c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff"
	I1101 10:37:44.913918  479596 cri.go:89] found id: "1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	I1101 10:37:44.913921  479596 cri.go:89] found id: "ea77073cf682212cbdff314bf42c52e6de94d41c312dd4240d84ecac9abeb1b9"
	I1101 10:37:44.913925  479596 cri.go:89] found id: ""
	I1101 10:37:44.913978  479596 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:37:44.927527  479596 retry.go:31] will retry after 518.968918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:44Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:37:45.447295  479596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:37:45.464247  479596 pause.go:52] kubelet running: false
	I1101 10:37:45.464364  479596 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:37:45.740157  479596 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:37:45.740254  479596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:37:45.845594  479596 cri.go:89] found id: "8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0"
	I1101 10:37:45.845622  479596 cri.go:89] found id: "4ddb4fef3b268154bc9e83ba2858fcb64e6baa4f2a44667a80d4995ab5d913ad"
	I1101 10:37:45.845628  479596 cri.go:89] found id: "d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af"
	I1101 10:37:45.845632  479596 cri.go:89] found id: "cb1254843ac79ef47142f6a8bc6ad54ed6322e797118f62073d8664938dddc43"
	I1101 10:37:45.845636  479596 cri.go:89] found id: "78070053967b8dc393db82612d91df7e0f712db3bfd50b12800aee9e57b0aa66"
	I1101 10:37:45.845640  479596 cri.go:89] found id: "847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764"
	I1101 10:37:45.845644  479596 cri.go:89] found id: "86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f"
	I1101 10:37:45.845647  479596 cri.go:89] found id: "0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1"
	I1101 10:37:45.845650  479596 cri.go:89] found id: "c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff"
	I1101 10:37:45.845667  479596 cri.go:89] found id: "1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	I1101 10:37:45.845674  479596 cri.go:89] found id: "ea77073cf682212cbdff314bf42c52e6de94d41c312dd4240d84ecac9abeb1b9"
	I1101 10:37:45.845677  479596 cri.go:89] found id: ""
	I1101 10:37:45.845754  479596 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:37:45.861347  479596 out.go:203] 
	W1101 10:37:45.864146  479596 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:37:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:37:45.864349  479596 out.go:285] * 
	* 
	W1101 10:37:45.872329  479596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:37:45.875335  479596 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-618070 --alsologtostderr -v=1 failed: exit status 80
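The stderr log above shows the failure sequence for this pause: kubelet is stopped and disabled, crictl still reports the kube-system and kubernetes-dashboard containers, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", so after the bounded retries minikube exits with GUEST_PAUSE (status 80). The sketch below is a minimal reproduction aid, not minikube's own pause code; it replays the same two probes inside the node (run it as root, e.g. after `minikube ssh`). One hedged explanation worth verifying on the node is that CRI-O is driving a runtime whose state directory is not /run/runc (crun, for instance, keeps its state under /run/crun by default), so runc has nothing to list even though containers are running.

    // pauseprobe.go - reproduction sketch (hypothetical, not minikube's code) of
    // the container-listing step that fails above: crictl sees the containers,
    // but `runc list -f json` keeps erroring on the missing /run/runc directory.
    // Run inside the node as root.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Probe 1: the CRI view, matching the crictl invocation in the log.
        crictlOut, err := run("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system")
        fmt.Printf("crictl ps err=%v\ncontainers:\n%s\n", err, crictlOut)

        // Probe 2: the runc view, retried a few times like the pause path;
        // in this run it never recovers.
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := run("runc", "list", "-f", "json")
            if err == nil {
                fmt.Printf("runc list succeeded:\n%s\n", out)
                return
            }
            fmt.Printf("attempt %d: runc list failed: %v\n%s\n", attempt, err, out)
            time.Sleep(500 * time.Millisecond)
        }
    }

If the runc probe keeps failing while crictl shows running containers, comparing against `sudo crictl info` (which prints the CRI-O runtime configuration) before filing the GitHub issue the error message points to would narrow down which state directory the runtime is actually using.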
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-618070
helpers_test.go:243: (dbg) docker inspect embed-certs-618070:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79",
	        "Created": "2025-11-01T10:34:43.970958066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:36:31.345561077Z",
	            "FinishedAt": "2025-11-01T10:36:30.239715484Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/hosts",
	        "LogPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79-json.log",
	        "Name": "/embed-certs-618070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-618070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-618070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79",
	                "LowerDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-618070",
	                "Source": "/var/lib/docker/volumes/embed-certs-618070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-618070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-618070",
	                "name.minikube.sigs.k8s.io": "embed-certs-618070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fce7d5de5836fa59d3fd7a28444fdd7d2e97908deea8834387bf40c4f458c701",
	            "SandboxKey": "/var/run/docker/netns/fce7d5de5836",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-618070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:2a:1d:d0:21:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a9320fc77e2ab7eae746fc7f855e8764c40a6520ae3423667b1ef82153e035d",
	                    "EndpointID": "d0dc8b75614ded11a8f71f7ba0da95bfc0066b108fd01a65c618c8261b9bbea0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-618070",
	                        "5b2cdd451242"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
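The inspect output confirms the container is still running and publishes its ports only on 127.0.0.1 (SSH on host port 33435, the API server's 8443 on 33438), which is how the pause command earlier resolved its SSH connection with `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`. Below is a small sketch of the same lookup done by decoding the inspect JSON instead of a Go template; it is an illustrative helper for manual debugging, not part of the test suite, and for the container inspected here it would print 127.0.0.1:33435.

    // sshport.go - illustrative helper: resolves the forwarded SSH port of a
    // minikube node container by decoding `docker inspect` output.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type inspect struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "embed-certs-618070").Output()
        if err != nil {
            log.Fatalf("docker inspect: %v", err)
        }
        var containers []inspect
        if err := json.Unmarshal(out, &containers); err != nil {
            log.Fatalf("decode inspect JSON: %v", err)
        }
        if len(containers) == 0 {
            log.Fatal("no such container")
        }
        bindings := containers[0].NetworkSettings.Ports["22/tcp"]
        if len(bindings) == 0 {
            log.Fatal("no 22/tcp binding published")
        }
        fmt.Println(bindings[0].HostIp + ":" + bindings[0].HostPort)
    }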
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070: exit status 2 (422.245853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-618070 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-618070 logs -n 25: (1.627113914s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:37:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:37:25.562826  477629 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:37:25.563015  477629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:25.563027  477629 out.go:374] Setting ErrFile to fd 2...
	I1101 10:37:25.563033  477629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:25.563295  477629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:37:25.563805  477629 out.go:368] Setting JSON to false
	I1101 10:37:25.564854  477629 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8395,"bootTime":1761985051,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:37:25.564927  477629 start.go:143] virtualization:  
	I1101 10:37:25.568668  477629 out.go:179] * [default-k8s-diff-port-245904] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:37:25.572783  477629 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:37:25.572864  477629 notify.go:221] Checking for updates...
	I1101 10:37:25.578909  477629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:37:25.581800  477629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:37:25.585370  477629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:37:25.588363  477629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:37:25.591353  477629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:37:25.594867  477629 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:25.595034  477629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:37:25.619323  477629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:37:25.619451  477629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:25.687563  477629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:37:25.678904905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:25.687667  477629 docker.go:319] overlay module found
	I1101 10:37:25.690847  477629 out.go:179] * Using the docker driver based on user configuration
	I1101 10:37:25.693787  477629 start.go:309] selected driver: docker
	I1101 10:37:25.693808  477629 start.go:930] validating driver "docker" against <nil>
	I1101 10:37:25.693823  477629 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:37:25.694565  477629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:25.754449  477629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:37:25.745211519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:25.754606  477629 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:37:25.754849  477629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:37:25.757831  477629 out.go:179] * Using Docker driver with root privileges
	I1101 10:37:25.760509  477629 cni.go:84] Creating CNI manager for ""
	I1101 10:37:25.760569  477629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:37:25.760584  477629 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:37:25.760682  477629 start.go:353] cluster config:
	{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:37:25.763935  477629 out.go:179] * Starting "default-k8s-diff-port-245904" primary control-plane node in "default-k8s-diff-port-245904" cluster
	I1101 10:37:25.766687  477629 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:37:25.769586  477629 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:37:25.772486  477629 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:25.772538  477629 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:37:25.772551  477629 cache.go:59] Caching tarball of preloaded images
	I1101 10:37:25.772582  477629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:37:25.772640  477629 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:37:25.772650  477629 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:37:25.772764  477629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:37:25.772782  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json: {Name:mkbca565f403e0cdd3933dcfff8dbc334db598ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:25.792021  477629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:37:25.792046  477629 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:37:25.792063  477629 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:37:25.792094  477629 start.go:360] acquireMachinesLock for default-k8s-diff-port-245904: {Name:mkd19cff2a35f3bd59a365809e4cb064a7918a80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:37:25.792224  477629 start.go:364] duration metric: took 107.407µs to acquireMachinesLock for "default-k8s-diff-port-245904"
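(Aside, not part of the log above: the lock parameters shown, Delay:500ms and Timeout:10m0s, describe a poll-until-timeout file lock. Below is a minimal illustrative Go sketch of that pattern under those assumptions; the path and function name are invented and this is not minikube's actual implementation.)

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file every `delay` until `timeout`
// elapses. O_CREATE|O_EXCL makes creation atomic: only one process wins.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Hypothetical lock path; the log acquired its lock in ~107µs because no
	// other process held it.
	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; machine provisioning can proceed")
}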
	I1101 10:37:25.792256  477629 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:37:25.792330  477629 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:37:21.903020  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:24.388323  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	I1101 10:37:25.797490  477629 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:37:25.797739  477629 start.go:159] libmachine.API.Create for "default-k8s-diff-port-245904" (driver="docker")
	I1101 10:37:25.797785  477629 client.go:173] LocalClient.Create starting
	I1101 10:37:25.797858  477629 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 10:37:25.797896  477629 main.go:143] libmachine: Decoding PEM data...
	I1101 10:37:25.797912  477629 main.go:143] libmachine: Parsing certificate...
	I1101 10:37:25.797982  477629 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 10:37:25.798009  477629 main.go:143] libmachine: Decoding PEM data...
	I1101 10:37:25.798023  477629 main.go:143] libmachine: Parsing certificate...
	I1101 10:37:25.798389  477629 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:37:25.814811  477629 cli_runner.go:211] docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:37:25.814912  477629 network_create.go:284] running [docker network inspect default-k8s-diff-port-245904] to gather additional debugging logs...
	I1101 10:37:25.814936  477629 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904
	W1101 10:37:25.830398  477629 cli_runner.go:211] docker network inspect default-k8s-diff-port-245904 returned with exit code 1
	I1101 10:37:25.830433  477629 network_create.go:287] error running [docker network inspect default-k8s-diff-port-245904]: docker network inspect default-k8s-diff-port-245904: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-245904 not found
	I1101 10:37:25.830449  477629 network_create.go:289] output of [docker network inspect default-k8s-diff-port-245904]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-245904 not found
	
	** /stderr **
	I1101 10:37:25.830567  477629 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:37:25.847443  477629 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4026c1b0063 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ce:bd:30:c3:d1} reservation:<nil>}
	I1101 10:37:25.847832  477629 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e394bead07b9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:98:c6:36:ba:b7} reservation:<nil>}
	I1101 10:37:25.848058  477629 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bd8719a80444 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:75:48:52:a5:ee} reservation:<nil>}
	I1101 10:37:25.848518  477629 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cb8b0}
	I1101 10:37:25.848540  477629 network_create.go:124] attempt to create docker network default-k8s-diff-port-245904 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:37:25.848595  477629 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 default-k8s-diff-port-245904
	I1101 10:37:25.918303  477629 network_create.go:108] docker network default-k8s-diff-port-245904 192.168.76.0/24 created
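(Aside, not part of the log above: the three "skipping subnet ... that is taken" lines show the selection strategy, namely trying candidate 192.168.x.0/24 blocks in order and keeping the first one no existing bridge network already claims, here 192.168.76.0/24. A rough, illustrative Go sketch of that idea follows; names are invented, and the step of 9 between candidates is simply what this log exhibits.)

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 blocks (49, 58, 67, ...)
// and returns the first one that is not in the taken set.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// Subnets already claimed by existing docker bridge networks,
	// as reported by the "skipping subnet" lines above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	if cidr, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.76.0/24
	}
}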
	I1101 10:37:25.918334  477629 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-245904" container
	I1101 10:37:25.918426  477629 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:37:25.934484  477629 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-245904 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:37:25.957220  477629 oci.go:103] Successfully created a docker volume default-k8s-diff-port-245904
	I1101 10:37:25.957310  477629 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-245904-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --entrypoint /usr/bin/test -v default-k8s-diff-port-245904:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:37:26.511242  477629 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-245904
	I1101 10:37:26.511309  477629 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:26.511329  477629 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:37:26.511409  477629 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-245904:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
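(Aside, not part of the log above: the two docker invocations just shown create the named volume that will back /var in the node container and then run a throwaway "preload sidecar" whose only job is to untar the cached images and binaries into that volume. The Go sketch below just shells out to docker to reproduce the same two steps; the tarball path is a placeholder, the image tag is shortened, and the labels from the real commands are omitted.)

package main

import (
	"fmt"
	"os/exec"
)

// run executes a docker subcommand and surfaces its combined output on error.
func run(args ...string) error {
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	vol := "default-k8s-diff-port-245904"
	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"

	// 1. Create the named volume that will hold /var for the node container.
	if err := run("volume", "create", vol); err != nil {
		panic(err)
	}
	// 2. Run a short-lived container that extracts the preload tarball
	//    (container images + k8s state) into that volume.
	if err := run("run", "--rm",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", vol+":/extractDir",
		"--entrypoint", "/usr/bin/tar",
		img, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir"); err != nil {
		panic(err)
	}
}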
	W1101 10:37:26.887304  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:29.386979  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	I1101 10:37:29.886886  473779 pod_ready.go:94] pod "coredns-66bc5c9577-6rf8b" is "Ready"
	I1101 10:37:29.886915  473779 pod_ready.go:86] duration metric: took 41.005993929s for pod "coredns-66bc5c9577-6rf8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.890127  473779 pod_ready.go:83] waiting for pod "etcd-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.894682  473779 pod_ready.go:94] pod "etcd-embed-certs-618070" is "Ready"
	I1101 10:37:29.894705  473779 pod_ready.go:86] duration metric: took 4.553384ms for pod "etcd-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.896903  473779 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.902093  473779 pod_ready.go:94] pod "kube-apiserver-embed-certs-618070" is "Ready"
	I1101 10:37:29.902117  473779 pod_ready.go:86] duration metric: took 5.145519ms for pod "kube-apiserver-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.904328  473779 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.088227  473779 pod_ready.go:94] pod "kube-controller-manager-embed-certs-618070" is "Ready"
	I1101 10:37:30.088264  473779 pod_ready.go:86] duration metric: took 183.915455ms for pod "kube-controller-manager-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.286127  473779 pod_ready.go:83] waiting for pod "kube-proxy-8lcjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.685413  473779 pod_ready.go:94] pod "kube-proxy-8lcjb" is "Ready"
	I1101 10:37:30.685482  473779 pod_ready.go:86] duration metric: took 399.278243ms for pod "kube-proxy-8lcjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.884757  473779 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:31.285099  473779 pod_ready.go:94] pod "kube-scheduler-embed-certs-618070" is "Ready"
	I1101 10:37:31.285128  473779 pod_ready.go:86] duration metric: took 400.344074ms for pod "kube-scheduler-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:31.285140  473779 pod_ready.go:40] duration metric: took 42.40792431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:37:31.359935  473779 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:37:31.362681  473779 out.go:179] * Done! kubectl is now configured to use "embed-certs-618070" cluster and "default" namespace by default
	I1101 10:37:30.954227  477629 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-245904:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.442760705s)
	I1101 10:37:30.954257  477629 kic.go:203] duration metric: took 4.442924688s to extract preloaded images to volume ...
	W1101 10:37:30.954410  477629 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:37:30.954535  477629 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:37:31.019726  477629 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-245904 --name default-k8s-diff-port-245904 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --network default-k8s-diff-port-245904 --ip 192.168.76.2 --volume default-k8s-diff-port-245904:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:37:31.359625  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Running}}
	I1101 10:37:31.416214  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:37:31.459568  477629 cli_runner.go:164] Run: docker exec default-k8s-diff-port-245904 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:37:31.528891  477629 oci.go:144] the created container "default-k8s-diff-port-245904" has a running status.
	I1101 10:37:31.528917  477629 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa...
	I1101 10:37:31.760241  477629 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:37:31.794864  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:37:31.816807  477629 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:37:31.816829  477629 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-245904 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:37:31.890735  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:37:32.001842  477629 machine.go:94] provisionDockerMachine start ...
	I1101 10:37:32.001945  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:32.031997  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:32.032324  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:32.032334  477629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:37:32.033054  477629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:37:35.189425  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:37:35.189450  477629 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-245904"
	I1101 10:37:35.189523  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:35.207602  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:35.207895  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:35.207907  477629 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-245904 && echo "default-k8s-diff-port-245904" | sudo tee /etc/hostname
	I1101 10:37:35.368860  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:37:35.368940  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:35.387368  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:35.387694  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:35.387718  477629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-245904' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-245904/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-245904' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:37:35.538514  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
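(Aside, not part of the log above: "Using SSH client type: native" means these provisioning commands run over SSH against the host port docker published for the container's port 22, 127.0.0.1:33440 in this run, rather than via docker exec. A minimal illustrative sketch with golang.org/x/crypto/ssh follows; it is not minikube's code, and the key location is abbreviated from the path in this log.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key generated for the node ("Creating ssh key for kic" above);
	// path shortened here for the example.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/default-k8s-diff-port-245904/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key verification
	}
	// 33440 is the host port forwarded to the container's sshd.
	client, err := ssh.Dial("tcp", "127.0.0.1:33440", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}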
	I1101 10:37:35.538598  477629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:37:35.538624  477629 ubuntu.go:190] setting up certificates
	I1101 10:37:35.538642  477629 provision.go:84] configureAuth start
	I1101 10:37:35.538706  477629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:37:35.556283  477629 provision.go:143] copyHostCerts
	I1101 10:37:35.556353  477629 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:37:35.556368  477629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:37:35.556450  477629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:37:35.556557  477629 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:37:35.556569  477629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:37:35.556598  477629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:37:35.556656  477629 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:37:35.556665  477629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:37:35.556692  477629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:37:35.556750  477629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-245904 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-245904 localhost minikube]
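(Aside, not part of the log above: "generating server cert ... san=[...]" corresponds to issuing a certificate signed by the minikube CA that carries the listed subject alternative names: 127.0.0.1, 192.168.76.2, the node name, localhost, and minikube. Below is a compact, illustrative crypto/x509 sketch of that kind of issuance; it generates its own throwaway CA, uses ECDSA instead of whatever minikube actually uses, and elides error handling for brevity.)

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA (error handling elided; this is only a sketch).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs the node will be reached on.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-245904"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h expiry in the config dump
		DNSNames:     []string{"default-k8s-diff-port-245904", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}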
	I1101 10:37:36.671618  477629 provision.go:177] copyRemoteCerts
	I1101 10:37:36.671694  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:37:36.671736  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:36.688743  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:36.797541  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:37:36.816854  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:37:36.835673  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:37:36.853299  477629 provision.go:87] duration metric: took 1.314620839s to configureAuth
	I1101 10:37:36.853328  477629 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:37:36.853557  477629 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:36.853670  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:36.870961  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:36.871270  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:36.871290  477629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:37:37.208099  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:37:37.208162  477629 machine.go:97] duration metric: took 5.206296914s to provisionDockerMachine
	I1101 10:37:37.208178  477629 client.go:176] duration metric: took 11.410382914s to LocalClient.Create
	I1101 10:37:37.208196  477629 start.go:167] duration metric: took 11.410459141s to libmachine.API.Create "default-k8s-diff-port-245904"
	I1101 10:37:37.208205  477629 start.go:293] postStartSetup for "default-k8s-diff-port-245904" (driver="docker")
	I1101 10:37:37.208218  477629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:37:37.208297  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:37:37.208338  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.227311  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.333812  477629 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:37:37.337068  477629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:37:37.337100  477629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:37:37.337111  477629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:37:37.337165  477629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:37:37.337252  477629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:37:37.337358  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:37:37.344780  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:37:37.362628  477629 start.go:296] duration metric: took 154.405965ms for postStartSetup
	I1101 10:37:37.362995  477629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:37:37.379341  477629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:37:37.379625  477629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:37:37.379676  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.395873  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.498853  477629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:37:37.503889  477629 start.go:128] duration metric: took 11.711543742s to createHost
	I1101 10:37:37.503918  477629 start.go:83] releasing machines lock for "default-k8s-diff-port-245904", held for 11.711680352s
	I1101 10:37:37.503992  477629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:37:37.521397  477629 ssh_runner.go:195] Run: cat /version.json
	I1101 10:37:37.521456  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.521740  477629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:37:37.521812  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.548964  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.549503  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.649305  477629 ssh_runner.go:195] Run: systemctl --version
	I1101 10:37:37.745181  477629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:37:37.781278  477629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:37:37.786348  477629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:37:37.786424  477629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:37:37.817133  477629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:37:37.817199  477629 start.go:496] detecting cgroup driver to use...
	I1101 10:37:37.817253  477629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:37:37.817335  477629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:37:37.836773  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:37:37.849479  477629 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:37:37.849566  477629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:37:37.867615  477629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:37:37.886293  477629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:37:38.011881  477629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:37:38.151133  477629 docker.go:234] disabling docker service ...
	I1101 10:37:38.151213  477629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:37:38.176137  477629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:37:38.188862  477629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:37:38.301904  477629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:37:38.419151  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:37:38.432506  477629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:37:38.455963  477629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:37:38.456052  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.465219  477629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:37:38.465306  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.475043  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.484075  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.493280  477629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:37:38.501825  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.511421  477629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.524961  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.534125  477629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:37:38.541900  477629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:37:38.549540  477629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:37:38.668046  477629 ssh_runner.go:195] Run: sudo systemctl restart crio
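(Aside, not part of the log above: each sed invocation in the preceding block rewrites one key in /etc/crio/crio.conf.d/02-crio.conf, setting the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl, after which crio is restarted to pick the drop-in up. Functionally each edit is "set key = value, whatever it was before". An illustrative Go helper for that pattern follows; it ignores TOML sections for simplicity and is not tied to minikube's code.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites any existing `key = ...` line to `key = value`,
// appending the line if the key is not present yet.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), line)
	} else {
		out = string(data) + line + "\n"
	}
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	// Work on a scratch copy rather than the real drop-in.
	path := "/tmp/02-crio.conf"
	_ = os.WriteFile(path, []byte("[crio.image]\npause_image = \"old\"\n"), 0o644)
	// Mirror two of the edits performed in the log above.
	_ = setConfKey(path, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
	_ = setConfKey(path, "cgroup_manager", `"cgroupfs"`)
}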
	I1101 10:37:38.802962  477629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:37:38.803099  477629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:37:38.807129  477629 start.go:564] Will wait 60s for crictl version
	I1101 10:37:38.807243  477629 ssh_runner.go:195] Run: which crictl
	I1101 10:37:38.811442  477629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:37:38.840925  477629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:37:38.841088  477629 ssh_runner.go:195] Run: crio --version
	I1101 10:37:38.871757  477629 ssh_runner.go:195] Run: crio --version
	I1101 10:37:38.910884  477629 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:37:38.913823  477629 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:37:38.930725  477629 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:37:38.934739  477629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
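(Aside, not part of the log above: the bash one-liner just shown is an idempotent "upsert" of a hosts entry: drop any existing line for host.minikube.internal, append the fresh "ip<TAB>host" mapping, and copy the result back over /etc/hosts. The same idea expressed in Go, as an illustrative sketch that operates on a scratch file rather than the real /etc/hosts.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line that maps `host` and appends a
// fresh "ip<TAB>host" line, mirroring the grep -v / echo / cp pipeline above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	path := "/tmp/hosts-example"
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := upsertHostsEntry(path, "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}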
	I1101 10:37:38.945145  477629 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:37:38.945258  477629 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:38.945319  477629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:37:38.980962  477629 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:37:38.980984  477629 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:37:38.981038  477629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:37:39.007951  477629 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:37:39.008028  477629 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:37:39.008052  477629 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 10:37:39.008178  477629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-245904 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:37:39.008312  477629 ssh_runner.go:195] Run: crio config
	I1101 10:37:39.072640  477629 cni.go:84] Creating CNI manager for ""
	I1101 10:37:39.072663  477629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:37:39.072698  477629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:37:39.072729  477629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-245904 NodeName:default-k8s-diff-port-245904 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:37:39.072914  477629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-245904"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:37:39.073027  477629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:37:39.081512  477629 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:37:39.081611  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:37:39.089758  477629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 10:37:39.103560  477629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:37:39.117987  477629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 10:37:39.131533  477629 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:37:39.135678  477629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:37:39.146102  477629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:37:39.263798  477629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:37:39.280722  477629 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904 for IP: 192.168.76.2
	I1101 10:37:39.280788  477629 certs.go:195] generating shared ca certs ...
	I1101 10:37:39.280820  477629 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:39.280992  477629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:37:39.281058  477629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:37:39.281079  477629 certs.go:257] generating profile certs ...
	I1101 10:37:39.281169  477629 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key
	I1101 10:37:39.281204  477629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt with IP's: []
	I1101 10:37:39.645855  477629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt ...
	I1101 10:37:39.645929  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: {Name:mk8ad628d37fdb588d82aafbedc4619d7f9478f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:39.646176  477629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key ...
	I1101 10:37:39.646217  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key: {Name:mk8759444d5de8993a166f80ca979d8d746bc17f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:39.646379  477629 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67
	I1101 10:37:39.646428  477629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:37:40.116771  477629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67 ...
	I1101 10:37:40.116814  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67: {Name:mkb380ade971a21de38d629b2ad10318230261ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.117017  477629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67 ...
	I1101 10:37:40.117033  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67: {Name:mk21f97ccca821c56a550cbec2cb59c013577e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.117127  477629 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt
	I1101 10:37:40.117210  477629 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key
	I1101 10:37:40.117278  477629 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key
	I1101 10:37:40.117297  477629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt with IP's: []
	I1101 10:37:40.785324  477629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt ...
	I1101 10:37:40.785356  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt: {Name:mk6da3beeb913cc956cfb118470f557021fbdb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.785546  477629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key ...
	I1101 10:37:40.785561  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key: {Name:mk0212b138bf862420a8b0c822c293a15aef2949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.785783  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:37:40.785825  477629 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:37:40.785839  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:37:40.785862  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:37:40.785891  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:37:40.785917  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:37:40.785969  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:37:40.786579  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:37:40.806739  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:37:40.825980  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:37:40.845502  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:37:40.865494  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 10:37:40.885841  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:37:40.918222  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:37:40.939589  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:37:40.963506  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:37:40.987453  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:37:41.008562  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:37:41.029902  477629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:37:41.044594  477629 ssh_runner.go:195] Run: openssl version
	I1101 10:37:41.051365  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:37:41.060532  477629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:37:41.064447  477629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:37:41.064511  477629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:37:41.108196  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:37:41.116800  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:37:41.125460  477629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:37:41.129580  477629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:37:41.129685  477629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:37:41.171448  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:37:41.180058  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:37:41.188485  477629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:37:41.192453  477629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:37:41.192525  477629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:37:41.233620  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
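(Aside, not part of the log above: the pattern in the last several commands is how the node's CA bundle is populated: each PEM is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a symlink named <hash>.0 is created under /etc/ssl/certs, which is the by-hash lookup scheme OpenSSL uses. A small illustrative Go sketch of that convention; the paths are examples only.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// symlinks it as <hash>.0 in certsDir so OpenSSL can locate it by hash.
func linkCertByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", link) // e.g. /etc/ssl/certs/b5213941.0 as in the log above
}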
	I1101 10:37:41.242477  477629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:37:41.246674  477629 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:37:41.246737  477629 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:37:41.246820  477629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:37:41.246887  477629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:37:41.276908  477629 cri.go:89] found id: ""
	I1101 10:37:41.276987  477629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:37:41.285368  477629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:37:41.293794  477629 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:37:41.293871  477629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:37:41.301889  477629 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:37:41.301911  477629 kubeadm.go:158] found existing configuration files:
	
	I1101 10:37:41.301969  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 10:37:41.310305  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:37:41.310374  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:37:41.318338  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 10:37:41.326656  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:37:41.326719  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:37:41.334845  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 10:37:41.342647  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:37:41.342721  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:37:41.350213  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 10:37:41.358700  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:37:41.358786  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
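
Each of the four grep-then-rm pairs above is the same stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, https://control-plane.minikube.internal:8444, and is otherwise removed so kubeadm can regenerate it. On this first start every grep exits with status 2 because the files do not exist yet. Condensed into one loop (a sketch of the logged behaviour, not minikube's actual code):
	endpoint="https://control-plane.minikube.internal:8444"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected control-plane URL
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
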
	I1101 10:37:41.366737  477629 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:37:41.407955  477629 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:37:41.408106  477629 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:37:41.432289  477629 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:37:41.432480  477629 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:37:41.432558  477629 kubeadm.go:319] OS: Linux
	I1101 10:37:41.432642  477629 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:37:41.432758  477629 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:37:41.432856  477629 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:37:41.432958  477629 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:37:41.433066  477629 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:37:41.433169  477629 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:37:41.433230  477629 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:37:41.433290  477629 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:37:41.433342  477629 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:37:41.506314  477629 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:37:41.506433  477629 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:37:41.506543  477629 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:37:41.514603  477629 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:37:41.518890  477629 out.go:252]   - Generating certificates and keys ...
	I1101 10:37:41.519008  477629 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:37:41.519088  477629 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:37:41.701864  477629 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:37:42.108045  477629 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:37:42.329198  477629 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:37:43.237938  477629 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:37:43.646493  477629 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:37:43.646667  477629 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-245904 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:37:44.644157  477629 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:37:44.644628  477629 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-245904 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:37:45.173531  477629 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:37:45.376567  477629 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
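
kubeadm writes the certificates listed above into the certificateDir it announced earlier, /var/lib/minikube/certs. A quick way to inspect one of them afterwards (a sketch; run on the node, for example via minikube ssh):
	sudo ls /var/lib/minikube/certs
	# subject and expiry of the cert whose absence triggered this init path
	sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt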
	
	
	==> CRI-O <==
	Nov 01 10:37:15 embed-certs-618070 crio[651]: time="2025-11-01T10:37:15.784238197Z" level=info msg="Removed container c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt/dashboard-metrics-scraper" id=578055da-8e2b-4514-b72c-f4426e9111c3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:18 embed-certs-618070 conmon[1145]: conmon d7ad380eee52f1fa60c6 <ninfo>: container 1148 exited with status 1
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.766214149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8102f4a7-6fdc-4b59-8c54-7802acc01953 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.767367147Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8634e728-10b9-4732-9133-1be7aba9445f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.770801126Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=78dbd01f-a039-44f5-872d-3d3c4981ef71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.770941092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.779817868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.780009354Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8d56b0753ac1dcae3f5ad2a500ffb12922845de70c01293535daefce542aaa83/merged/etc/passwd: no such file or directory"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.780032518Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8d56b0753ac1dcae3f5ad2a500ffb12922845de70c01293535daefce542aaa83/merged/etc/group: no such file or directory"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.780335096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.812020078Z" level=info msg="Created container 8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0: kube-system/storage-provisioner/storage-provisioner" id=78dbd01f-a039-44f5-872d-3d3c4981ef71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.81547105Z" level=info msg="Starting container: 8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0" id=92751592-037e-4e71-b472-607e509bc8d3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.820159937Z" level=info msg="Started container" PID=1643 containerID=8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0 description=kube-system/storage-provisioner/storage-provisioner id=92751592-037e-4e71-b472-607e509bc8d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b98d1eadce81f1315badca97c9029b3bb86a4e8227522c15dfa5a04e245913e5
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.295639891Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.301130475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.301166439Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.301191752Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.305359496Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.305515831Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.305600288Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.309130621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.309282075Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.309356899Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.314345483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.314516777Z" level=info msg="Updated default CNI network name to kindnet"
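
The CREATE / WRITE / RENAME sequence above is kindnet refreshing its CNI config atomically: it writes 10-kindnet.conflist.temp and then renames it over 10-kindnet.conflist, and CRI-O's config watcher re-reads the default network after each event. To see what ended up on disk (a sketch; the profile name is taken from the log, adjust for your setup):
	minikube -p embed-certs-618070 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist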
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8c65bde628c4d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   b98d1eadce81f       storage-provisioner                          kube-system
	1dc34fc298773       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   990603bcc3246       dashboard-metrics-scraper-6ffb444bf9-r8sdt   kubernetes-dashboard
	ea77073cf6822       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   cbae2590813b1       kubernetes-dashboard-855c9754f9-h8dsr        kubernetes-dashboard
	4ddb4fef3b268       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   46e147c13dbbe       coredns-66bc5c9577-6rf8b                     kube-system
	948cfa9c0288e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   30855e7c5943d       busybox                                      default
	d7ad380eee52f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   b98d1eadce81f       storage-provisioner                          kube-system
	cb1254843ac79       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   ab12322b43378       kube-proxy-8lcjb                             kube-system
	78070053967b8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   21b8bb8dac695       kindnet-df7sw                                kube-system
	847fba8996ed9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   118d730151ffc       kube-apiserver-embed-certs-618070            kube-system
	86afaef5fe911       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   26f7239c3afb0       kube-scheduler-embed-certs-618070            kube-system
	0d9c776cc885a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   4571a7235709f       kube-controller-manager-embed-certs-618070   kube-system
	c991117973d3b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   553fbb2d8b8a3       etcd-embed-certs-618070                      kube-system
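
The table above is the node's CRI view of its workloads, including the two Exited entries left behind by container restarts. The same listing can be pulled on the node with crictl (a sketch; the truncated container ID comes from the table above):
	sudo crictl ps -a                  # all containers, running and exited
	sudo crictl logs 1dc34fc298773    # inspect the crash-looping dashboard-metrics-scraper container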
	
	
	==> coredns [4ddb4fef3b268154bc9e83ba2858fcb64e6baa4f2a44667a80d4995ab5d913ad] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37209 - 14588 "HINFO IN 5846744398119670203.618815735913311456. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.059267108s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-618070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-618070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=embed-certs-618070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-618070
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:37:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-618070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                5139744f-7550-4fc5-8cfe-6439f928869a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-6rf8b                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-618070                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-df7sw                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-embed-certs-618070             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-embed-certs-618070    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-8lcjb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-embed-certs-618070             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r8sdt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h8dsr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s (x8 over 2m43s)  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node embed-certs-618070 event: Registered Node embed-certs-618070 in Controller
	  Normal   NodeReady                103s                   kubelet          Node embed-certs-618070 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node embed-certs-618070 event: Registered Node embed-certs-618070 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff] <==
	{"level":"warn","ts":"2025-11-01T10:36:45.380422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.402258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.429221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.456645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.467330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.496397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.523104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.541853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.558112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.591577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.606039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.629957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.656990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.662689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.680633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.704470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.723808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.735846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.754085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.788759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.799250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.828615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.849999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.896086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.993856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55032","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:47 up  2:20,  0 user,  load average: 3.21, 4.04, 3.23
	Linux embed-certs-618070 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78070053967b8dc393db82612d91df7e0f712db3bfd50b12800aee9e57b0aa66] <==
	I1101 10:36:48.053529       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:36:48.056034       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:36:48.056182       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:36:48.056196       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:36:48.056211       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:36:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:36:48.295779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:36:48.295808       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:36:48.295818       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:36:48.302668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:37:18.296522       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:37:18.296658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:37:18.303242       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:37:18.303344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:37:19.696606       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:37:19.696657       1 metrics.go:72] Registering metrics
	I1101 10:37:19.696753       1 controller.go:711] "Syncing nftables rules"
	I1101 10:37:28.295332       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:37:28.295374       1 main.go:301] handling current node
	I1101 10:37:38.297765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:37:38.297797       1 main.go:301] handling current node
	
	
	==> kube-apiserver [847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764] <==
	I1101 10:36:46.939968       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:36:46.940128       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:36:46.940146       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:36:46.940152       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:36:46.940325       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:36:46.940333       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:36:46.940393       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:36:46.940401       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:36:46.965456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:36:46.979079       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:36:46.984937       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:36:46.985117       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:36:47.002615       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1101 10:36:47.032698       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:36:47.456040       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:36:47.593283       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:36:47.686030       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:36:47.841501       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:36:47.981012       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:36:48.060605       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:36:48.243028       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.227.90"}
	I1101 10:36:48.271769       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.3.14"}
	I1101 10:36:50.498850       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:36:50.598488       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:36:50.750434       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1] <==
	I1101 10:36:50.206054       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:36:50.210634       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:36:50.213457       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:36:50.219685       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:36:50.225941       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:36:50.225940       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:36:50.227047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:36:50.228221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:36:50.232478       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:36:50.232591       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:36:50.236703       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:36:50.242374       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:36:50.243288       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:36:50.243325       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:36:50.243740       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:36:50.243784       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:36:50.243811       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:36:50.243816       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:36:50.243822       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:36:50.243841       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:36:50.243999       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:36:50.246852       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:36:50.249054       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:36:50.252365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:36:50.258520       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [cb1254843ac79ef47142f6a8bc6ad54ed6322e797118f62073d8664938dddc43] <==
	I1101 10:36:48.372971       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:36:48.467286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:36:48.567533       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:36:48.567572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:36:48.567675       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:36:48.586294       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:36:48.586555       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:36:48.590483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:36:48.590816       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:36:48.590897       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:48.592114       1 config.go:200] "Starting service config controller"
	I1101 10:36:48.592184       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:36:48.592243       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:36:48.592272       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:36:48.592308       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:36:48.592335       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:36:48.593023       1 config.go:309] "Starting node config controller"
	I1101 10:36:48.596100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:36:48.596179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:36:48.693814       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:36:48.693853       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:36:48.693906       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f] <==
	I1101 10:36:44.322732       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:36:46.636211       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:36:46.636330       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:36:46.636364       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:36:46.636404       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:36:47.078720       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:36:47.078770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:47.105297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:47.105336       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:47.105905       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:36:47.105979       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:36:47.205593       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.968890     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8936c3f0-ba9d-4810-aab8-12f7e79df6f0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-h8dsr\" (UID: \"8936c3f0-ba9d-4810-aab8-12f7e79df6f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h8dsr"
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.969503     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdtl\" (UniqueName: \"kubernetes.io/projected/8936c3f0-ba9d-4810-aab8-12f7e79df6f0-kube-api-access-cvdtl\") pod \"kubernetes-dashboard-855c9754f9-h8dsr\" (UID: \"8936c3f0-ba9d-4810-aab8-12f7e79df6f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h8dsr"
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.969553     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36fad9a6-56d4-47f6-b258-fdc01bc261b1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-r8sdt\" (UID: \"36fad9a6-56d4-47f6-b258-fdc01bc261b1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt"
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.969582     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw85p\" (UniqueName: \"kubernetes.io/projected/36fad9a6-56d4-47f6-b258-fdc01bc261b1-kube-api-access-tw85p\") pod \"dashboard-metrics-scraper-6ffb444bf9-r8sdt\" (UID: \"36fad9a6-56d4-47f6-b258-fdc01bc261b1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt"
	Nov 01 10:36:55 embed-certs-618070 kubelet[774]: I1101 10:36:55.683317     774 scope.go:117] "RemoveContainer" containerID="e0b8395c460dea5120c27e5a43f41336e318b382f19d13ea76a21c98a7d4d3d7"
	Nov 01 10:36:56 embed-certs-618070 kubelet[774]: I1101 10:36:56.694864     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:36:56 embed-certs-618070 kubelet[774]: I1101 10:36:56.695808     774 scope.go:117] "RemoveContainer" containerID="e0b8395c460dea5120c27e5a43f41336e318b382f19d13ea76a21c98a7d4d3d7"
	Nov 01 10:36:56 embed-certs-618070 kubelet[774]: E1101 10:36:56.706040     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:36:57 embed-certs-618070 kubelet[774]: I1101 10:36:57.698602     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:36:57 embed-certs-618070 kubelet[774]: E1101 10:36:57.698758     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:01 embed-certs-618070 kubelet[774]: I1101 10:37:01.152292     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:37:01 embed-certs-618070 kubelet[774]: E1101 10:37:01.153064     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.536199     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.754877     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.755188     774 scope.go:117] "RemoveContainer" containerID="1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: E1101 10:37:15.755332     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.808490     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h8dsr" podStartSLOduration=16.234466183 podStartE2EDuration="25.808471818s" podCreationTimestamp="2025-11-01 10:36:50 +0000 UTC" firstStartedPulling="2025-11-01 10:36:51.205214534 +0000 UTC m=+11.977228209" lastFinishedPulling="2025-11-01 10:37:00.779220178 +0000 UTC m=+21.551233844" observedRunningTime="2025-11-01 10:37:01.726097672 +0000 UTC m=+22.498111347" watchObservedRunningTime="2025-11-01 10:37:15.808471818 +0000 UTC m=+36.580485559"
	Nov 01 10:37:18 embed-certs-618070 kubelet[774]: I1101 10:37:18.765663     774 scope.go:117] "RemoveContainer" containerID="d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af"
	Nov 01 10:37:21 embed-certs-618070 kubelet[774]: I1101 10:37:21.152597     774 scope.go:117] "RemoveContainer" containerID="1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	Nov 01 10:37:21 embed-certs-618070 kubelet[774]: E1101 10:37:21.152795     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:34 embed-certs-618070 kubelet[774]: I1101 10:37:34.536557     774 scope.go:117] "RemoveContainer" containerID="1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	Nov 01 10:37:34 embed-certs-618070 kubelet[774]: E1101 10:37:34.536764     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:44 embed-certs-618070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:37:44 embed-certs-618070 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:37:44 embed-certs-618070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ea77073cf682212cbdff314bf42c52e6de94d41c312dd4240d84ecac9abeb1b9] <==
	2025/11/01 10:37:00 Using namespace: kubernetes-dashboard
	2025/11/01 10:37:00 Using in-cluster config to connect to apiserver
	2025/11/01 10:37:00 Using secret token for csrf signing
	2025/11/01 10:37:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:37:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:37:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:37:00 Generating JWE encryption key
	2025/11/01 10:37:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:37:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:37:01 Initializing JWE encryption key from synchronized object
	2025/11/01 10:37:01 Creating in-cluster Sidecar client
	2025/11/01 10:37:01 Serving insecurely on HTTP port: 9090
	2025/11/01 10:37:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:37:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:37:00 Starting overwatch
	
	
	==> storage-provisioner [8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0] <==
	I1101 10:37:18.859657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:37:18.859826       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:37:18.862182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:22.323880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:26.584506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:30.183120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:33.236491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:36.259564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:36.266000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:36.266149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:37:36.266321       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-618070_0fe5a5f0-7d4c-4d4d-b577-c3009a21fd5d!
	I1101 10:37:36.267220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba9fae06-5ee5-464b-964a-84fa8bc80eb0", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-618070_0fe5a5f0-7d4c-4d4d-b577-c3009a21fd5d became leader
	W1101 10:37:36.281049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:36.286086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:36.366719       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-618070_0fe5a5f0-7d4c-4d4d-b577-c3009a21fd5d!
	W1101 10:37:38.289401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:38.295154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:40.300152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:40.308262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:42.312648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:42.321362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:44.325278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:44.335047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:46.342917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:46.348307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af] <==
	I1101 10:36:48.281577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:37:18.284056       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-618070 -n embed-certs-618070
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-618070 -n embed-certs-618070: exit status 2 (540.82254ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-618070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-618070
helpers_test.go:243: (dbg) docker inspect embed-certs-618070:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79",
	        "Created": "2025-11-01T10:34:43.970958066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:36:31.345561077Z",
	            "FinishedAt": "2025-11-01T10:36:30.239715484Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/hosts",
	        "LogPath": "/var/lib/docker/containers/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79/5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79-json.log",
	        "Name": "/embed-certs-618070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-618070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-618070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b2cdd451242e2b76c9aecfd710deb21402a386b7c61e98697c9a8a12d47bd79",
	                "LowerDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e391c747e4a6396812f64520c631c0256d5792198919f8560482efe9279b290d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-618070",
	                "Source": "/var/lib/docker/volumes/embed-certs-618070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-618070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-618070",
	                "name.minikube.sigs.k8s.io": "embed-certs-618070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fce7d5de5836fa59d3fd7a28444fdd7d2e97908deea8834387bf40c4f458c701",
	            "SandboxKey": "/var/run/docker/netns/fce7d5de5836",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-618070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:2a:1d:d0:21:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a9320fc77e2ab7eae746fc7f855e8764c40a6520ae3423667b1ef82153e035d",
	                    "EndpointID": "d0dc8b75614ded11a8f71f7ba0da95bfc0066b108fd01a65c618c8261b9bbea0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-618070",
	                        "5b2cdd451242"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070: exit status 2 (472.509496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-618070 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-618070 logs -n 25: (1.614386906s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-459318       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ image   │ old-k8s-version-180313 image list --format=json                                                                                                                                                                                               │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ pause   │ -p old-k8s-version-180313 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │                     │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:37:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:37:25.562826  477629 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:37:25.563015  477629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:25.563027  477629 out.go:374] Setting ErrFile to fd 2...
	I1101 10:37:25.563033  477629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:25.563295  477629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:37:25.563805  477629 out.go:368] Setting JSON to false
	I1101 10:37:25.564854  477629 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8395,"bootTime":1761985051,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:37:25.564927  477629 start.go:143] virtualization:  
	I1101 10:37:25.568668  477629 out.go:179] * [default-k8s-diff-port-245904] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:37:25.572783  477629 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:37:25.572864  477629 notify.go:221] Checking for updates...
	I1101 10:37:25.578909  477629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:37:25.581800  477629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:37:25.585370  477629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:37:25.588363  477629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:37:25.591353  477629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:37:25.594867  477629 config.go:182] Loaded profile config "embed-certs-618070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:25.595034  477629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:37:25.619323  477629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:37:25.619451  477629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:25.687563  477629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:37:25.678904905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:25.687667  477629 docker.go:319] overlay module found
	I1101 10:37:25.690847  477629 out.go:179] * Using the docker driver based on user configuration
	I1101 10:37:25.693787  477629 start.go:309] selected driver: docker
	I1101 10:37:25.693808  477629 start.go:930] validating driver "docker" against <nil>
	I1101 10:37:25.693823  477629 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:37:25.694565  477629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:25.754449  477629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:37:25.745211519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:25.754606  477629 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:37:25.754849  477629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:37:25.757831  477629 out.go:179] * Using Docker driver with root privileges
	I1101 10:37:25.760509  477629 cni.go:84] Creating CNI manager for ""
	I1101 10:37:25.760569  477629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:37:25.760584  477629 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:37:25.760682  477629 start.go:353] cluster config:
	{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:37:25.763935  477629 out.go:179] * Starting "default-k8s-diff-port-245904" primary control-plane node in "default-k8s-diff-port-245904" cluster
	I1101 10:37:25.766687  477629 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:37:25.769586  477629 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:37:25.772486  477629 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:25.772538  477629 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:37:25.772551  477629 cache.go:59] Caching tarball of preloaded images
	I1101 10:37:25.772582  477629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:37:25.772640  477629 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:37:25.772650  477629 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:37:25.772764  477629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:37:25.772782  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json: {Name:mkbca565f403e0cdd3933dcfff8dbc334db598ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:25.792021  477629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:37:25.792046  477629 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:37:25.792063  477629 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:37:25.792094  477629 start.go:360] acquireMachinesLock for default-k8s-diff-port-245904: {Name:mkd19cff2a35f3bd59a365809e4cb064a7918a80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:37:25.792224  477629 start.go:364] duration metric: took 107.407µs to acquireMachinesLock for "default-k8s-diff-port-245904"
	I1101 10:37:25.792256  477629 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:37:25.792330  477629 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:37:21.903020  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:24.388323  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	I1101 10:37:25.797490  477629 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:37:25.797739  477629 start.go:159] libmachine.API.Create for "default-k8s-diff-port-245904" (driver="docker")
	I1101 10:37:25.797785  477629 client.go:173] LocalClient.Create starting
	I1101 10:37:25.797858  477629 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 10:37:25.797896  477629 main.go:143] libmachine: Decoding PEM data...
	I1101 10:37:25.797912  477629 main.go:143] libmachine: Parsing certificate...
	I1101 10:37:25.797982  477629 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 10:37:25.798009  477629 main.go:143] libmachine: Decoding PEM data...
	I1101 10:37:25.798023  477629 main.go:143] libmachine: Parsing certificate...
	I1101 10:37:25.798389  477629 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:37:25.814811  477629 cli_runner.go:211] docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:37:25.814912  477629 network_create.go:284] running [docker network inspect default-k8s-diff-port-245904] to gather additional debugging logs...
	I1101 10:37:25.814936  477629 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904
	W1101 10:37:25.830398  477629 cli_runner.go:211] docker network inspect default-k8s-diff-port-245904 returned with exit code 1
	I1101 10:37:25.830433  477629 network_create.go:287] error running [docker network inspect default-k8s-diff-port-245904]: docker network inspect default-k8s-diff-port-245904: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-245904 not found
	I1101 10:37:25.830449  477629 network_create.go:289] output of [docker network inspect default-k8s-diff-port-245904]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-245904 not found
	
	** /stderr **
	I1101 10:37:25.830567  477629 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:37:25.847443  477629 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4026c1b0063 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ce:bd:30:c3:d1} reservation:<nil>}
	I1101 10:37:25.847832  477629 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e394bead07b9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:98:c6:36:ba:b7} reservation:<nil>}
	I1101 10:37:25.848058  477629 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bd8719a80444 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:75:48:52:a5:ee} reservation:<nil>}
	I1101 10:37:25.848518  477629 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cb8b0}
	I1101 10:37:25.848540  477629 network_create.go:124] attempt to create docker network default-k8s-diff-port-245904 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:37:25.848595  477629 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 default-k8s-diff-port-245904
	I1101 10:37:25.918303  477629 network_create.go:108] docker network default-k8s-diff-port-245904 192.168.76.0/24 created
	I1101 10:37:25.918334  477629 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-245904" container
	I1101 10:37:25.918426  477629 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:37:25.934484  477629 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-245904 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:37:25.957220  477629 oci.go:103] Successfully created a docker volume default-k8s-diff-port-245904
	I1101 10:37:25.957310  477629 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-245904-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --entrypoint /usr/bin/test -v default-k8s-diff-port-245904:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:37:26.511242  477629 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-245904
	I1101 10:37:26.511309  477629 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:26.511329  477629 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:37:26.511409  477629 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-245904:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 10:37:26.887304  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	W1101 10:37:29.386979  473779 pod_ready.go:104] pod "coredns-66bc5c9577-6rf8b" is not "Ready", error: <nil>
	I1101 10:37:29.886886  473779 pod_ready.go:94] pod "coredns-66bc5c9577-6rf8b" is "Ready"
	I1101 10:37:29.886915  473779 pod_ready.go:86] duration metric: took 41.005993929s for pod "coredns-66bc5c9577-6rf8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.890127  473779 pod_ready.go:83] waiting for pod "etcd-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.894682  473779 pod_ready.go:94] pod "etcd-embed-certs-618070" is "Ready"
	I1101 10:37:29.894705  473779 pod_ready.go:86] duration metric: took 4.553384ms for pod "etcd-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.896903  473779 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.902093  473779 pod_ready.go:94] pod "kube-apiserver-embed-certs-618070" is "Ready"
	I1101 10:37:29.902117  473779 pod_ready.go:86] duration metric: took 5.145519ms for pod "kube-apiserver-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:29.904328  473779 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.088227  473779 pod_ready.go:94] pod "kube-controller-manager-embed-certs-618070" is "Ready"
	I1101 10:37:30.088264  473779 pod_ready.go:86] duration metric: took 183.915455ms for pod "kube-controller-manager-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.286127  473779 pod_ready.go:83] waiting for pod "kube-proxy-8lcjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.685413  473779 pod_ready.go:94] pod "kube-proxy-8lcjb" is "Ready"
	I1101 10:37:30.685482  473779 pod_ready.go:86] duration metric: took 399.278243ms for pod "kube-proxy-8lcjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:30.884757  473779 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:31.285099  473779 pod_ready.go:94] pod "kube-scheduler-embed-certs-618070" is "Ready"
	I1101 10:37:31.285128  473779 pod_ready.go:86] duration metric: took 400.344074ms for pod "kube-scheduler-embed-certs-618070" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:37:31.285140  473779 pod_ready.go:40] duration metric: took 42.40792431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:37:31.359935  473779 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:37:31.362681  473779 out.go:179] * Done! kubectl is now configured to use "embed-certs-618070" cluster and "default" namespace by default
	I1101 10:37:30.954227  477629 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-245904:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.442760705s)
	I1101 10:37:30.954257  477629 kic.go:203] duration metric: took 4.442924688s to extract preloaded images to volume ...
	W1101 10:37:30.954410  477629 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:37:30.954535  477629 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:37:31.019726  477629 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-245904 --name default-k8s-diff-port-245904 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-245904 --network default-k8s-diff-port-245904 --ip 192.168.76.2 --volume default-k8s-diff-port-245904:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:37:31.359625  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Running}}
	I1101 10:37:31.416214  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:37:31.459568  477629 cli_runner.go:164] Run: docker exec default-k8s-diff-port-245904 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:37:31.528891  477629 oci.go:144] the created container "default-k8s-diff-port-245904" has a running status.
	I1101 10:37:31.528917  477629 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa...
	I1101 10:37:31.760241  477629 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:37:31.794864  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:37:31.816807  477629 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:37:31.816829  477629 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-245904 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:37:31.890735  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:37:32.001842  477629 machine.go:94] provisionDockerMachine start ...
	I1101 10:37:32.001945  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:32.031997  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:32.032324  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:32.032334  477629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:37:32.033054  477629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:37:35.189425  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:37:35.189450  477629 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-245904"
	I1101 10:37:35.189523  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:35.207602  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:35.207895  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:35.207907  477629 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-245904 && echo "default-k8s-diff-port-245904" | sudo tee /etc/hostname
	I1101 10:37:35.368860  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:37:35.368940  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:35.387368  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:35.387694  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:35.387718  477629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-245904' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-245904/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-245904' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:37:35.538514  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
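
Context note: the "Using SSH client type: native" lines above are libmachine dialing the container's published SSH port (127.0.0.1:33440 here) with the generated id_rsa key and running the hostname commands shown. A rough standalone equivalent using golang.org/x/crypto/ssh follows; the key path and port are taken from this log, and this is a sketch, not libmachine's implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port as seen in the log above; adjust for your own profile.
	keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/default-k8s-diff-port-245904/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33440", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
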
	I1101 10:37:35.538598  477629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:37:35.538624  477629 ubuntu.go:190] setting up certificates
	I1101 10:37:35.538642  477629 provision.go:84] configureAuth start
	I1101 10:37:35.538706  477629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:37:35.556283  477629 provision.go:143] copyHostCerts
	I1101 10:37:35.556353  477629 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:37:35.556368  477629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:37:35.556450  477629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:37:35.556557  477629 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:37:35.556569  477629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:37:35.556598  477629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:37:35.556656  477629 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:37:35.556665  477629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:37:35.556692  477629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:37:35.556750  477629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-245904 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-245904 localhost minikube]
	I1101 10:37:36.671618  477629 provision.go:177] copyRemoteCerts
	I1101 10:37:36.671694  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:37:36.671736  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:36.688743  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:36.797541  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:37:36.816854  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:37:36.835673  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:37:36.853299  477629 provision.go:87] duration metric: took 1.314620839s to configureAuth
	I1101 10:37:36.853328  477629 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:37:36.853557  477629 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:36.853670  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:36.870961  477629 main.go:143] libmachine: Using SSH client type: native
	I1101 10:37:36.871270  477629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1101 10:37:36.871290  477629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:37:37.208099  477629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:37:37.208162  477629 machine.go:97] duration metric: took 5.206296914s to provisionDockerMachine
	I1101 10:37:37.208178  477629 client.go:176] duration metric: took 11.410382914s to LocalClient.Create
	I1101 10:37:37.208196  477629 start.go:167] duration metric: took 11.410459141s to libmachine.API.Create "default-k8s-diff-port-245904"
	I1101 10:37:37.208205  477629 start.go:293] postStartSetup for "default-k8s-diff-port-245904" (driver="docker")
	I1101 10:37:37.208218  477629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:37:37.208297  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:37:37.208338  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.227311  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.333812  477629 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:37:37.337068  477629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:37:37.337100  477629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:37:37.337111  477629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:37:37.337165  477629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:37:37.337252  477629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:37:37.337358  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:37:37.344780  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:37:37.362628  477629 start.go:296] duration metric: took 154.405965ms for postStartSetup
	I1101 10:37:37.362995  477629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:37:37.379341  477629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:37:37.379625  477629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:37:37.379676  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.395873  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.498853  477629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:37:37.503889  477629 start.go:128] duration metric: took 11.711543742s to createHost
	I1101 10:37:37.503918  477629 start.go:83] releasing machines lock for "default-k8s-diff-port-245904", held for 11.711680352s
	I1101 10:37:37.503992  477629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:37:37.521397  477629 ssh_runner.go:195] Run: cat /version.json
	I1101 10:37:37.521456  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.521740  477629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:37:37.521812  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:37:37.548964  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.549503  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:37:37.649305  477629 ssh_runner.go:195] Run: systemctl --version
	I1101 10:37:37.745181  477629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:37:37.781278  477629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:37:37.786348  477629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:37:37.786424  477629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:37:37.817133  477629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:37:37.817199  477629 start.go:496] detecting cgroup driver to use...
	I1101 10:37:37.817253  477629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:37:37.817335  477629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:37:37.836773  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:37:37.849479  477629 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:37:37.849566  477629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:37:37.867615  477629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:37:37.886293  477629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:37:38.011881  477629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:37:38.151133  477629 docker.go:234] disabling docker service ...
	I1101 10:37:38.151213  477629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:37:38.176137  477629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:37:38.188862  477629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:37:38.301904  477629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:37:38.419151  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:37:38.432506  477629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:37:38.455963  477629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:37:38.456052  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.465219  477629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:37:38.465306  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.475043  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.484075  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.493280  477629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:37:38.501825  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.511421  477629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.524961  477629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:37:38.534125  477629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:37:38.541900  477629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:37:38.549540  477629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:37:38.668046  477629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:37:38.802962  477629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:37:38.803099  477629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:37:38.807129  477629 start.go:564] Will wait 60s for crictl version
	I1101 10:37:38.807243  477629 ssh_runner.go:195] Run: which crictl
	I1101 10:37:38.811442  477629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:37:38.840925  477629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:37:38.841088  477629 ssh_runner.go:195] Run: crio --version
	I1101 10:37:38.871757  477629 ssh_runner.go:195] Run: crio --version
	I1101 10:37:38.910884  477629 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:37:38.913823  477629 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:37:38.930725  477629 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:37:38.934739  477629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:37:38.945145  477629 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:37:38.945258  477629 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:38.945319  477629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:37:38.980962  477629 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:37:38.980984  477629 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:37:38.981038  477629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:37:39.007951  477629 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:37:39.008028  477629 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:37:39.008052  477629 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 10:37:39.008178  477629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-245904 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:37:39.008312  477629 ssh_runner.go:195] Run: crio config
	I1101 10:37:39.072640  477629 cni.go:84] Creating CNI manager for ""
	I1101 10:37:39.072663  477629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:37:39.072698  477629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:37:39.072729  477629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-245904 NodeName:default-k8s-diff-port-245904 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:37:39.072914  477629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-245904"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:37:39.073027  477629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:37:39.081512  477629 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:37:39.081611  477629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:37:39.089758  477629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 10:37:39.103560  477629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:37:39.117987  477629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
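
Context note: the kubeadm config dumped above is rendered as a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that decodes each document and prints its kind can help when inspecting what was rendered; it assumes gopkg.in/yaml.v3 and the file path from this log.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each document in the file is decoded into a generic map.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}
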
	I1101 10:37:39.131533  477629 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:37:39.135678  477629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:37:39.146102  477629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:37:39.263798  477629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:37:39.280722  477629 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904 for IP: 192.168.76.2
	I1101 10:37:39.280788  477629 certs.go:195] generating shared ca certs ...
	I1101 10:37:39.280820  477629 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:39.280992  477629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:37:39.281058  477629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:37:39.281079  477629 certs.go:257] generating profile certs ...
	I1101 10:37:39.281169  477629 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key
	I1101 10:37:39.281204  477629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt with IP's: []
	I1101 10:37:39.645855  477629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt ...
	I1101 10:37:39.645929  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: {Name:mk8ad628d37fdb588d82aafbedc4619d7f9478f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:39.646176  477629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key ...
	I1101 10:37:39.646217  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key: {Name:mk8759444d5de8993a166f80ca979d8d746bc17f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:39.646379  477629 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67
	I1101 10:37:39.646428  477629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:37:40.116771  477629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67 ...
	I1101 10:37:40.116814  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67: {Name:mkb380ade971a21de38d629b2ad10318230261ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.117017  477629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67 ...
	I1101 10:37:40.117033  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67: {Name:mk21f97ccca821c56a550cbec2cb59c013577e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.117127  477629 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt.52ff7e67 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt
	I1101 10:37:40.117210  477629 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key
	I1101 10:37:40.117278  477629 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key
	I1101 10:37:40.117297  477629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt with IP's: []
	I1101 10:37:40.785324  477629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt ...
	I1101 10:37:40.785356  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt: {Name:mk6da3beeb913cc956cfb118470f557021fbdb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:40.785546  477629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key ...
	I1101 10:37:40.785561  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key: {Name:mk0212b138bf862420a8b0c822c293a15aef2949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
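
Context note: the crypto.go lines above generate the profile's client, apiserver, and proxy-client key pairs and write the .crt/.key files under the profile directory. The sketch below shows the same kind of operation with only the Go standard library; it produces a self-signed certificate with illustrative SANs, not minikube's CA-signed profile certs.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-example"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		// SANs analogous to the apiserver cert above; values are illustrative.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube"},
	}
	// Self-signed: the template is also the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("example.crt", certPEM, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("example.key", keyPEM, 0o600); err != nil {
		panic(err)
	}
}
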
	I1101 10:37:40.785783  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:37:40.785825  477629 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:37:40.785839  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:37:40.785862  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:37:40.785891  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:37:40.785917  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:37:40.785969  477629 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:37:40.786579  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:37:40.806739  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:37:40.825980  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:37:40.845502  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:37:40.865494  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 10:37:40.885841  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:37:40.918222  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:37:40.939589  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:37:40.963506  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:37:40.987453  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:37:41.008562  477629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:37:41.029902  477629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:37:41.044594  477629 ssh_runner.go:195] Run: openssl version
	I1101 10:37:41.051365  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:37:41.060532  477629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:37:41.064447  477629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:37:41.064511  477629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:37:41.108196  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:37:41.116800  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:37:41.125460  477629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:37:41.129580  477629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:37:41.129685  477629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:37:41.171448  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:37:41.180058  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:37:41.188485  477629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:37:41.192453  477629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:37:41.192525  477629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:37:41.233620  477629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:37:41.242477  477629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:37:41.246674  477629 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:37:41.246737  477629 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:37:41.246820  477629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:37:41.246887  477629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:37:41.276908  477629 cri.go:89] found id: ""
	I1101 10:37:41.276987  477629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:37:41.285368  477629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:37:41.293794  477629 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:37:41.293871  477629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:37:41.301889  477629 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:37:41.301911  477629 kubeadm.go:158] found existing configuration files:
	
	I1101 10:37:41.301969  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 10:37:41.310305  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:37:41.310374  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:37:41.318338  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 10:37:41.326656  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:37:41.326719  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:37:41.334845  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 10:37:41.342647  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:37:41.342721  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:37:41.350213  477629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 10:37:41.358700  477629 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:37:41.358786  477629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:37:41.366737  477629 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:37:41.407955  477629 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:37:41.408106  477629 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:37:41.432289  477629 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:37:41.432480  477629 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:37:41.432558  477629 kubeadm.go:319] OS: Linux
	I1101 10:37:41.432642  477629 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:37:41.432758  477629 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:37:41.432856  477629 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:37:41.432958  477629 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:37:41.433066  477629 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:37:41.433169  477629 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:37:41.433230  477629 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:37:41.433290  477629 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:37:41.433342  477629 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:37:41.506314  477629 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:37:41.506433  477629 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:37:41.506543  477629 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:37:41.514603  477629 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:37:41.518890  477629 out.go:252]   - Generating certificates and keys ...
	I1101 10:37:41.519008  477629 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:37:41.519088  477629 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:37:41.701864  477629 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:37:42.108045  477629 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:37:42.329198  477629 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:37:43.237938  477629 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:37:43.646493  477629 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:37:43.646667  477629 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-245904 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:37:44.644157  477629 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:37:44.644628  477629 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-245904 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:37:45.173531  477629 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:37:45.376567  477629 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:37:46.031328  477629 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:37:46.032350  477629 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:37:46.677798  477629 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:37:47.287597  477629 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:37:47.857017  477629 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:37:48.327708  477629 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:37:48.599587  477629 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:37:48.600638  477629 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:37:48.607236  477629 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 01 10:37:15 embed-certs-618070 crio[651]: time="2025-11-01T10:37:15.784238197Z" level=info msg="Removed container c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt/dashboard-metrics-scraper" id=578055da-8e2b-4514-b72c-f4426e9111c3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:37:18 embed-certs-618070 conmon[1145]: conmon d7ad380eee52f1fa60c6 <ninfo>: container 1148 exited with status 1
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.766214149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8102f4a7-6fdc-4b59-8c54-7802acc01953 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.767367147Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8634e728-10b9-4732-9133-1be7aba9445f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.770801126Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=78dbd01f-a039-44f5-872d-3d3c4981ef71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.770941092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.779817868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.780009354Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8d56b0753ac1dcae3f5ad2a500ffb12922845de70c01293535daefce542aaa83/merged/etc/passwd: no such file or directory"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.780032518Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8d56b0753ac1dcae3f5ad2a500ffb12922845de70c01293535daefce542aaa83/merged/etc/group: no such file or directory"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.780335096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.812020078Z" level=info msg="Created container 8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0: kube-system/storage-provisioner/storage-provisioner" id=78dbd01f-a039-44f5-872d-3d3c4981ef71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.81547105Z" level=info msg="Starting container: 8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0" id=92751592-037e-4e71-b472-607e509bc8d3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:37:18 embed-certs-618070 crio[651]: time="2025-11-01T10:37:18.820159937Z" level=info msg="Started container" PID=1643 containerID=8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0 description=kube-system/storage-provisioner/storage-provisioner id=92751592-037e-4e71-b472-607e509bc8d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b98d1eadce81f1315badca97c9029b3bb86a4e8227522c15dfa5a04e245913e5
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.295639891Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.301130475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.301166439Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.301191752Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.305359496Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.305515831Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.305600288Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.309130621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.309282075Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.309356899Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.314345483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:37:28 embed-certs-618070 crio[651]: time="2025-11-01T10:37:28.314516777Z" level=info msg="Updated default CNI network name to kindnet"
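
Context note: the "CNI monitoring event" entries above are CRI-O reacting to kindnet rewriting its conflist under /etc/cni/net.d (CREATE, WRITE, RENAME). The sketch below reproduces that style of directory watching with github.com/fsnotify/fsnotify; it is an illustration of the pattern, not CRI-O's own code, and the watched path is taken from this log.

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Directory from the log above; kindnet writes 10-kindnet.conflist here.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
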
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8c65bde628c4d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           31 seconds ago       Running             storage-provisioner         2                   b98d1eadce81f       storage-provisioner                          kube-system
	1dc34fc298773       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   990603bcc3246       dashboard-metrics-scraper-6ffb444bf9-r8sdt   kubernetes-dashboard
	ea77073cf6822       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   cbae2590813b1       kubernetes-dashboard-855c9754f9-h8dsr        kubernetes-dashboard
	4ddb4fef3b268       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   46e147c13dbbe       coredns-66bc5c9577-6rf8b                     kube-system
	948cfa9c0288e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   30855e7c5943d       busybox                                      default
	d7ad380eee52f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   b98d1eadce81f       storage-provisioner                          kube-system
	cb1254843ac79       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   ab12322b43378       kube-proxy-8lcjb                             kube-system
	78070053967b8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   21b8bb8dac695       kindnet-df7sw                                kube-system
	847fba8996ed9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   118d730151ffc       kube-apiserver-embed-certs-618070            kube-system
	86afaef5fe911       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   26f7239c3afb0       kube-scheduler-embed-certs-618070            kube-system
	0d9c776cc885a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   4571a7235709f       kube-controller-manager-embed-certs-618070   kube-system
	c991117973d3b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   553fbb2d8b8a3       etcd-embed-certs-618070                      kube-system
	
	
	==> coredns [4ddb4fef3b268154bc9e83ba2858fcb64e6baa4f2a44667a80d4995ab5d913ad] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37209 - 14588 "HINFO IN 5846744398119670203.618815735913311456. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.059267108s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-618070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-618070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=embed-certs-618070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-618070
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:37:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:35:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:37:38 +0000   Sat, 01 Nov 2025 10:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-618070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                5139744f-7550-4fc5-8cfe-6439f928869a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-6rf8b                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-embed-certs-618070                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-df7sw                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-embed-certs-618070             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-embed-certs-618070    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-8lcjb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-embed-certs-618070             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r8sdt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h8dsr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m29s                  node-controller  Node embed-certs-618070 event: Registered Node embed-certs-618070 in Controller
	  Normal   NodeReady                106s                   kubelet          Node embed-certs-618070 status is now: NodeReady
	  Normal   Starting                 71s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)      kubelet          Node embed-certs-618070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)      kubelet          Node embed-certs-618070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)      kubelet          Node embed-certs-618070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                    node-controller  Node embed-certs-618070 event: Registered Node embed-certs-618070 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:14] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:15] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c991117973d3b82d813a55a1584524c2e3edded68d94536c0ddb1c66b64c56ff] <==
	{"level":"warn","ts":"2025-11-01T10:36:45.380422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.402258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.429221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.456645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.467330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.496397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.523104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.541853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.558112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.591577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.606039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.629957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.656990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.662689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.680633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.704470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.723808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.735846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.754085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.788759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.799250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.828615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.849999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.896086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:36:45.993856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55032","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:50 up  2:20,  0 user,  load average: 3.35, 4.06, 3.24
	Linux embed-certs-618070 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78070053967b8dc393db82612d91df7e0f712db3bfd50b12800aee9e57b0aa66] <==
	I1101 10:36:48.053529       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:36:48.056034       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:36:48.056182       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:36:48.056196       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:36:48.056211       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:36:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:36:48.295779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:36:48.295808       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:36:48.295818       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:36:48.302668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:37:18.296522       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:37:18.296658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:37:18.303242       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:37:18.303344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:37:19.696606       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:37:19.696657       1 metrics.go:72] Registering metrics
	I1101 10:37:19.696753       1 controller.go:711] "Syncing nftables rules"
	I1101 10:37:28.295332       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:37:28.295374       1 main.go:301] handling current node
	I1101 10:37:38.297765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:37:38.297797       1 main.go:301] handling current node
	I1101 10:37:48.300224       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:37:48.300255       1 main.go:301] handling current node
	
	
	==> kube-apiserver [847fba8996ed9a3711b5e855594bd200e40bf224b23742f55ae2e602d50b4764] <==
	I1101 10:36:46.939968       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:36:46.940128       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:36:46.940146       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:36:46.940152       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:36:46.940325       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:36:46.940333       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:36:46.940393       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:36:46.940401       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:36:46.965456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:36:46.979079       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:36:46.984937       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:36:46.985117       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:36:47.002615       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1101 10:36:47.032698       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:36:47.456040       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:36:47.593283       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:36:47.686030       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:36:47.841501       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:36:47.981012       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:36:48.060605       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:36:48.243028       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.227.90"}
	I1101 10:36:48.271769       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.3.14"}
	I1101 10:36:50.498850       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:36:50.598488       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:36:50.750434       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d9c776cc885a82d3e1aeb688d3f68459e11c2cfc0c5d107c9fb9b3792e020a1] <==
	I1101 10:36:50.206054       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:36:50.210634       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:36:50.213457       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:36:50.219685       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:36:50.225941       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:36:50.225940       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:36:50.227047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:36:50.228221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:36:50.232478       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:36:50.232591       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:36:50.236703       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:36:50.242374       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:36:50.243288       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:36:50.243325       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:36:50.243740       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:36:50.243784       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:36:50.243811       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:36:50.243816       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:36:50.243822       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:36:50.243841       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:36:50.243999       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:36:50.246852       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:36:50.249054       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:36:50.252365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:36:50.258520       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [cb1254843ac79ef47142f6a8bc6ad54ed6322e797118f62073d8664938dddc43] <==
	I1101 10:36:48.372971       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:36:48.467286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:36:48.567533       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:36:48.567572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:36:48.567675       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:36:48.586294       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:36:48.586555       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:36:48.590483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:36:48.590816       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:36:48.590897       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:48.592114       1 config.go:200] "Starting service config controller"
	I1101 10:36:48.592184       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:36:48.592243       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:36:48.592272       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:36:48.592308       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:36:48.592335       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:36:48.593023       1 config.go:309] "Starting node config controller"
	I1101 10:36:48.596100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:36:48.596179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:36:48.693814       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:36:48.693853       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:36:48.693906       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [86afaef5fe9119b7c4301a84ac984fdf305581ba783077b0ffb0cfb22ca22a7f] <==
	I1101 10:36:44.322732       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:36:46.636211       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:36:46.636330       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:36:46.636364       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:36:46.636404       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:36:47.078720       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:36:47.078770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:36:47.105297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:47.105336       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:36:47.105905       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:36:47.105979       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:36:47.205593       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.968890     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8936c3f0-ba9d-4810-aab8-12f7e79df6f0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-h8dsr\" (UID: \"8936c3f0-ba9d-4810-aab8-12f7e79df6f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h8dsr"
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.969503     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdtl\" (UniqueName: \"kubernetes.io/projected/8936c3f0-ba9d-4810-aab8-12f7e79df6f0-kube-api-access-cvdtl\") pod \"kubernetes-dashboard-855c9754f9-h8dsr\" (UID: \"8936c3f0-ba9d-4810-aab8-12f7e79df6f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h8dsr"
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.969553     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36fad9a6-56d4-47f6-b258-fdc01bc261b1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-r8sdt\" (UID: \"36fad9a6-56d4-47f6-b258-fdc01bc261b1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt"
	Nov 01 10:36:50 embed-certs-618070 kubelet[774]: I1101 10:36:50.969582     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw85p\" (UniqueName: \"kubernetes.io/projected/36fad9a6-56d4-47f6-b258-fdc01bc261b1-kube-api-access-tw85p\") pod \"dashboard-metrics-scraper-6ffb444bf9-r8sdt\" (UID: \"36fad9a6-56d4-47f6-b258-fdc01bc261b1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt"
	Nov 01 10:36:55 embed-certs-618070 kubelet[774]: I1101 10:36:55.683317     774 scope.go:117] "RemoveContainer" containerID="e0b8395c460dea5120c27e5a43f41336e318b382f19d13ea76a21c98a7d4d3d7"
	Nov 01 10:36:56 embed-certs-618070 kubelet[774]: I1101 10:36:56.694864     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:36:56 embed-certs-618070 kubelet[774]: I1101 10:36:56.695808     774 scope.go:117] "RemoveContainer" containerID="e0b8395c460dea5120c27e5a43f41336e318b382f19d13ea76a21c98a7d4d3d7"
	Nov 01 10:36:56 embed-certs-618070 kubelet[774]: E1101 10:36:56.706040     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:36:57 embed-certs-618070 kubelet[774]: I1101 10:36:57.698602     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:36:57 embed-certs-618070 kubelet[774]: E1101 10:36:57.698758     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:01 embed-certs-618070 kubelet[774]: I1101 10:37:01.152292     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:37:01 embed-certs-618070 kubelet[774]: E1101 10:37:01.153064     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.536199     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.754877     774 scope.go:117] "RemoveContainer" containerID="c97879b886e6d58b5655d361d6f79c1b4c5560e556fb8e00fe5e161d33304344"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.755188     774 scope.go:117] "RemoveContainer" containerID="1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: E1101 10:37:15.755332     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:15 embed-certs-618070 kubelet[774]: I1101 10:37:15.808490     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h8dsr" podStartSLOduration=16.234466183 podStartE2EDuration="25.808471818s" podCreationTimestamp="2025-11-01 10:36:50 +0000 UTC" firstStartedPulling="2025-11-01 10:36:51.205214534 +0000 UTC m=+11.977228209" lastFinishedPulling="2025-11-01 10:37:00.779220178 +0000 UTC m=+21.551233844" observedRunningTime="2025-11-01 10:37:01.726097672 +0000 UTC m=+22.498111347" watchObservedRunningTime="2025-11-01 10:37:15.808471818 +0000 UTC m=+36.580485559"
	Nov 01 10:37:18 embed-certs-618070 kubelet[774]: I1101 10:37:18.765663     774 scope.go:117] "RemoveContainer" containerID="d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af"
	Nov 01 10:37:21 embed-certs-618070 kubelet[774]: I1101 10:37:21.152597     774 scope.go:117] "RemoveContainer" containerID="1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	Nov 01 10:37:21 embed-certs-618070 kubelet[774]: E1101 10:37:21.152795     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:34 embed-certs-618070 kubelet[774]: I1101 10:37:34.536557     774 scope.go:117] "RemoveContainer" containerID="1dc34fc298773848f5e6db5f9c2638ed705f08a90dea9b703b2d5fce5b2d9be9"
	Nov 01 10:37:34 embed-certs-618070 kubelet[774]: E1101 10:37:34.536764     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r8sdt_kubernetes-dashboard(36fad9a6-56d4-47f6-b258-fdc01bc261b1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r8sdt" podUID="36fad9a6-56d4-47f6-b258-fdc01bc261b1"
	Nov 01 10:37:44 embed-certs-618070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:37:44 embed-certs-618070 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:37:44 embed-certs-618070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ea77073cf682212cbdff314bf42c52e6de94d41c312dd4240d84ecac9abeb1b9] <==
	2025/11/01 10:37:00 Using namespace: kubernetes-dashboard
	2025/11/01 10:37:00 Using in-cluster config to connect to apiserver
	2025/11/01 10:37:00 Using secret token for csrf signing
	2025/11/01 10:37:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:37:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:37:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:37:00 Generating JWE encryption key
	2025/11/01 10:37:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:37:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:37:01 Initializing JWE encryption key from synchronized object
	2025/11/01 10:37:01 Creating in-cluster Sidecar client
	2025/11/01 10:37:01 Serving insecurely on HTTP port: 9090
	2025/11/01 10:37:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:37:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:37:00 Starting overwatch
	
	
	==> storage-provisioner [8c65bde628c4d367e27643df58d39498755aa17b7bf49347a236898c9814c8c0] <==
	W1101 10:37:26.584506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:30.183120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:33.236491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:36.259564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:36.266000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:36.266149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:37:36.266321       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-618070_0fe5a5f0-7d4c-4d4d-b577-c3009a21fd5d!
	I1101 10:37:36.267220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba9fae06-5ee5-464b-964a-84fa8bc80eb0", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-618070_0fe5a5f0-7d4c-4d4d-b577-c3009a21fd5d became leader
	W1101 10:37:36.281049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:36.286086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:37:36.366719       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-618070_0fe5a5f0-7d4c-4d4d-b577-c3009a21fd5d!
	W1101 10:37:38.289401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:38.295154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:40.300152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:40.308262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:42.312648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:42.321362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:44.325278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:44.335047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:46.342917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:46.348307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:48.351160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:48.356533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:50.363054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:37:50.372166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7ad380eee52f1fa60c6c143c18da47989d61aaba821322c0187925c8fde79af] <==
	I1101 10:36:48.281577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:37:18.284056       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
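The captured logs above share one symptom: coredns, kindnet and the first storage-provisioner container all fail their initial list/watch calls with dial tcp 10.96.0.1:443: i/o timeout for roughly the first 30 seconds after the restart, then recover once caches sync. A minimal manual probe of that path, assuming the embed-certs-618070 profile is still running, kubectl still has its context, and curl is present in the node image (none of this is run by the test itself):

    # check the kubernetes Service VIP and the endpoints behind it
    kubectl --context embed-certs-618070 get svc kubernetes -o wide
    kubectl --context embed-certs-618070 -n default get endpointslices

    # probe the VIP from inside the node; -k because curl does not trust the apiserver cert here
    out/minikube-linux-arm64 -p embed-certs-618070 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version

If the in-node curl times out while the host-side kubectl calls succeed, the problem is the service VIP path (kube-proxy/kindnet rules) rather than the apiserver itself.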
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-618070 -n embed-certs-618070
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-618070 -n embed-certs-618070: exit status 2 (517.125973ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-618070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.06s)
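For context on the status check above: the per-field template prints Running for the apiserver, yet the command exits with status 2, and the helper itself notes that a non-zero exit "may be ok"; the kubelet log earlier in the capture shows kubelet.service being stopped at 10:37:44, which is consistent with at least one component not being in its expected state at that moment. A fuller view of all fields, as a hypothetical follow-up rather than something the test runs:

    # all status fields at once (Host, Kubelet, APIServer, Kubeconfig)
    out/minikube-linux-arm64 status -p embed-certs-618070 --output json

    # the same per-field template style the helper used, extended to more fields
    out/minikube-linux-arm64 status -p embed-certs-618070 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'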

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.689742ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
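Before the post-mortem: the stderr above shows why the addon enable fails here. minikube first checks whether the cluster is paused by listing OCI containers, and on this node that probe, sudo runc list -f json, exits 1 because /run/runc does not exist. One plausible but unconfirmed explanation is that this cri-o build uses a different default OCI runtime, so there is no runc state directory to read. A manual reproduction of the probe, plus a look at which runtime state directories actually exist on the node (the crun/crio paths below are assumptions, not taken from the log):

    # the exact command the paused-state check ran, per the error message above
    out/minikube-linux-arm64 -p newest-cni-761749 ssh -- sudo runc list -f json

    # see which runtime state roots are present instead (missing ones just print an ls error)
    out/minikube-linux-arm64 -p newest-cni-761749 ssh -- sudo ls -d /run/runc /run/crun /run/crio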
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-761749
helpers_test.go:243: (dbg) docker inspect newest-cni-761749:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e",
	        "Created": "2025-11-01T10:38:01.36860666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:38:01.432409822Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/hosts",
	        "LogPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e-json.log",
	        "Name": "/newest-cni-761749",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-761749:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-761749",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e",
	                "LowerDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-761749",
	                "Source": "/var/lib/docker/volumes/newest-cni-761749/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-761749",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-761749",
	                "name.minikube.sigs.k8s.io": "newest-cni-761749",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5e8346cfa5a9ef75bbd81b3a94a89f5142b930422a569fcd485da8424bafb2f",
	            "SandboxKey": "/var/run/docker/netns/b5e8346cfa5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-761749": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:28:b9:52:09:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5c14f0066ec7c912b0be843273782822de5f27a5f2c689449899d5fe3a845a2",
	                    "EndpointID": "47d10e8d5f6c7ea657ce6c66cf03f4f9d500eb6cb757db64f3e40f7f914e94cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-761749",
	                        "b0ea1613e7b9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
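The inspect output above shows Docker publishing the container's service ports only on 127.0.0.1, with 8443/tcp (the Kubernetes API server) mapped to host port 33448. As a reference, here is a minimal sketch of how that mapping could be read back on the CI host, assuming the newest-cni-761749 container still exists; the Go-template expression is the same one the harness itself uses later in these logs:

	# print the host port Docker published for the API server port 8443/tcp
	docker container inspect newest-cni-761749 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# against the state captured above this would print 33448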
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-761749 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-761749 logs -n 25: (1.088040053s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ delete  │ -p old-k8s-version-180313                                                                                                                                                                                                                     │ old-k8s-version-180313       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:35 UTC │
	│ delete  │ -p cert-expiration-459318                                                                                                                                                                                                                     │ cert-expiration-459318       │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:34 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:37:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:37:55.073757  481081 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:37:55.073905  481081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:55.073917  481081 out.go:374] Setting ErrFile to fd 2...
	I1101 10:37:55.073921  481081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:37:55.074187  481081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:37:55.074628  481081 out.go:368] Setting JSON to false
	I1101 10:37:55.075585  481081 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8424,"bootTime":1761985051,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:37:55.075660  481081 start.go:143] virtualization:  
	I1101 10:37:55.079816  481081 out.go:179] * [newest-cni-761749] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:37:55.084273  481081 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:37:55.084307  481081 notify.go:221] Checking for updates...
	I1101 10:37:55.090860  481081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:37:55.094010  481081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:37:55.097190  481081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:37:55.100242  481081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:37:55.103331  481081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:37:55.106887  481081 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:37:55.107039  481081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:37:55.177787  481081 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:37:55.177927  481081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:55.297587  481081 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:37:55.286131894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:55.297686  481081 docker.go:319] overlay module found
	I1101 10:37:55.300905  481081 out.go:179] * Using the docker driver based on user configuration
	I1101 10:37:55.303802  481081 start.go:309] selected driver: docker
	I1101 10:37:55.303819  481081 start.go:930] validating driver "docker" against <nil>
	I1101 10:37:55.303843  481081 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:37:55.304534  481081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:37:55.415018  481081 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:37:55.402409815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:37:55.415183  481081 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 10:37:55.415216  481081 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 10:37:55.415443  481081 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:37:55.418898  481081 out.go:179] * Using Docker driver with root privileges
	I1101 10:37:55.421836  481081 cni.go:84] Creating CNI manager for ""
	I1101 10:37:55.421908  481081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:37:55.421920  481081 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:37:55.421994  481081 start.go:353] cluster config:
	{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:37:55.425123  481081 out.go:179] * Starting "newest-cni-761749" primary control-plane node in "newest-cni-761749" cluster
	I1101 10:37:55.427933  481081 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:37:55.430925  481081 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:37:55.433740  481081 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:55.433804  481081 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:37:55.433817  481081 cache.go:59] Caching tarball of preloaded images
	I1101 10:37:55.433917  481081 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:37:55.433933  481081 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:37:55.434049  481081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:37:55.434072  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json: {Name:mk9d2cb17329f0e2bb5c16567cc80e1ce4dbd5ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:37:55.434229  481081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:37:55.458641  481081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:37:55.458664  481081 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:37:55.458676  481081 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:37:55.458698  481081 start.go:360] acquireMachinesLock for newest-cni-761749: {Name:mkbbc8f02c65f1e3740f70e3b6e44f341f2e91e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:37:55.458802  481081 start.go:364] duration metric: took 88.109µs to acquireMachinesLock for "newest-cni-761749"
	I1101 10:37:55.458825  481081 start.go:93] Provisioning new machine with config: &{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:37:55.458892  481081 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:37:56.648004  477629 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.821897622s
	I1101 10:37:56.868224  477629 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.042620276s
	I1101 10:37:58.328212  477629 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.501847735s
	I1101 10:37:58.359592  477629 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:37:58.390782  477629 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:37:58.410334  477629 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:37:58.410862  477629 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-245904 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:37:58.429762  477629 kubeadm.go:319] [bootstrap-token] Using token: 7iytpv.hjaetbv8ma3uh5gn
	I1101 10:37:58.433156  477629 out.go:252]   - Configuring RBAC rules ...
	I1101 10:37:58.433283  477629 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:37:58.442831  477629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:37:58.455286  477629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:37:58.463251  477629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:37:58.468128  477629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:37:58.475983  477629 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:37:58.753638  477629 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:37:59.414882  477629 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:37:59.735339  477629 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:37:59.735358  477629 kubeadm.go:319] 
	I1101 10:37:59.735422  477629 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:37:59.735427  477629 kubeadm.go:319] 
	I1101 10:37:59.735508  477629 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:37:59.735513  477629 kubeadm.go:319] 
	I1101 10:37:59.735540  477629 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:37:59.735602  477629 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:37:59.735655  477629 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:37:59.735659  477629 kubeadm.go:319] 
	I1101 10:37:59.735715  477629 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:37:59.735720  477629 kubeadm.go:319] 
	I1101 10:37:59.735769  477629 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:37:59.735774  477629 kubeadm.go:319] 
	I1101 10:37:59.735828  477629 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:37:59.735906  477629 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:37:59.735978  477629 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:37:59.735982  477629 kubeadm.go:319] 
	I1101 10:37:59.736070  477629 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:37:59.736157  477629 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:37:59.736162  477629 kubeadm.go:319] 
	I1101 10:37:59.736250  477629 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 7iytpv.hjaetbv8ma3uh5gn \
	I1101 10:37:59.736357  477629 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:37:59.736381  477629 kubeadm.go:319] 	--control-plane 
	I1101 10:37:59.736385  477629 kubeadm.go:319] 
	I1101 10:37:59.736474  477629 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:37:59.736478  477629 kubeadm.go:319] 
	I1101 10:37:59.736564  477629 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 7iytpv.hjaetbv8ma3uh5gn \
	I1101 10:37:59.736671  477629 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:37:59.742391  477629 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:37:59.742638  477629 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:37:59.742759  477629 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:37:59.742780  477629 cni.go:84] Creating CNI manager for ""
	I1101 10:37:59.742789  477629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:37:59.762525  477629 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:37:55.462315  481081 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:37:55.462536  481081 start.go:159] libmachine.API.Create for "newest-cni-761749" (driver="docker")
	I1101 10:37:55.462562  481081 client.go:173] LocalClient.Create starting
	I1101 10:37:55.462641  481081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem
	I1101 10:37:55.462679  481081 main.go:143] libmachine: Decoding PEM data...
	I1101 10:37:55.462693  481081 main.go:143] libmachine: Parsing certificate...
	I1101 10:37:55.462749  481081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem
	I1101 10:37:55.462767  481081 main.go:143] libmachine: Decoding PEM data...
	I1101 10:37:55.462780  481081 main.go:143] libmachine: Parsing certificate...
	I1101 10:37:55.463151  481081 cli_runner.go:164] Run: docker network inspect newest-cni-761749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:37:55.519348  481081 cli_runner.go:211] docker network inspect newest-cni-761749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:37:55.519426  481081 network_create.go:284] running [docker network inspect newest-cni-761749] to gather additional debugging logs...
	I1101 10:37:55.519443  481081 cli_runner.go:164] Run: docker network inspect newest-cni-761749
	W1101 10:37:55.540756  481081 cli_runner.go:211] docker network inspect newest-cni-761749 returned with exit code 1
	I1101 10:37:55.540783  481081 network_create.go:287] error running [docker network inspect newest-cni-761749]: docker network inspect newest-cni-761749: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-761749 not found
	I1101 10:37:55.540796  481081 network_create.go:289] output of [docker network inspect newest-cni-761749]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-761749 not found
	
	** /stderr **
	I1101 10:37:55.540898  481081 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:37:55.572677  481081 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4026c1b0063 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ce:bd:30:c3:d1} reservation:<nil>}
	I1101 10:37:55.573099  481081 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e394bead07b9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:98:c6:36:ba:b7} reservation:<nil>}
	I1101 10:37:55.573325  481081 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bd8719a80444 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:75:48:52:a5:ee} reservation:<nil>}
	I1101 10:37:55.573629  481081 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ca453ec076d5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:ed:ad:f2:f8:ce} reservation:<nil>}
	I1101 10:37:55.574071  481081 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a21f00}
	I1101 10:37:55.574088  481081 network_create.go:124] attempt to create docker network newest-cni-761749 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:37:55.574144  481081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-761749 newest-cni-761749
	I1101 10:37:55.658627  481081 network_create.go:108] docker network newest-cni-761749 192.168.85.0/24 created
	I1101 10:37:55.658659  481081 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-761749" container
	I1101 10:37:55.658731  481081 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:37:55.683331  481081 cli_runner.go:164] Run: docker volume create newest-cni-761749 --label name.minikube.sigs.k8s.io=newest-cni-761749 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:37:55.718529  481081 oci.go:103] Successfully created a docker volume newest-cni-761749
	I1101 10:37:55.718644  481081 cli_runner.go:164] Run: docker run --rm --name newest-cni-761749-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-761749 --entrypoint /usr/bin/test -v newest-cni-761749:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:37:56.461680  481081 oci.go:107] Successfully prepared a docker volume newest-cni-761749
	I1101 10:37:56.461765  481081 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:37:56.461785  481081 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:37:56.461869  481081 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-761749:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:37:59.777034  477629 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:37:59.785741  477629 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:37:59.785765  477629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:37:59.810373  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:38:00.498008  477629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:38:00.498118  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:00.498158  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-245904 minikube.k8s.io/updated_at=2025_11_01T10_38_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=default-k8s-diff-port-245904 minikube.k8s.io/primary=true
	I1101 10:38:00.526355  477629 ops.go:34] apiserver oom_adj: -16
	I1101 10:38:00.707125  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:01.207398  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:01.708467  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:02.207824  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:02.707674  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:03.207209  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:03.707792  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:04.207786  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:04.707402  477629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:04.954524  477629 kubeadm.go:1114] duration metric: took 4.456480451s to wait for elevateKubeSystemPrivileges
	I1101 10:38:04.954556  477629 kubeadm.go:403] duration metric: took 23.707823916s to StartCluster
	I1101 10:38:04.954572  477629 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:04.954632  477629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:04.955323  477629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:04.955539  477629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:38:04.955640  477629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:38:04.955878  477629 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:04.955922  477629 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:38:04.955985  477629 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-245904"
	I1101 10:38:04.956013  477629 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-245904"
	I1101 10:38:04.956039  477629 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:38:04.956701  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:38:04.957044  477629 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-245904"
	I1101 10:38:04.957066  477629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-245904"
	I1101 10:38:04.957361  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:38:04.961244  477629 out.go:179] * Verifying Kubernetes components...
	I1101 10:38:04.968013  477629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:04.998029  477629 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-245904"
	I1101 10:38:04.998069  477629 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:38:04.998626  477629 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:38:04.999458  477629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:38:01.252223  481081 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-761749:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.790312616s)
	I1101 10:38:01.252253  481081 kic.go:203] duration metric: took 4.790464618s to extract preloaded images to volume ...
	W1101 10:38:01.252392  481081 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:38:01.252501  481081 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:38:01.348589  481081 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-761749 --name newest-cni-761749 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-761749 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-761749 --network newest-cni-761749 --ip 192.168.85.2 --volume newest-cni-761749:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:38:01.670614  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Running}}
	I1101 10:38:01.692995  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:01.723928  481081 cli_runner.go:164] Run: docker exec newest-cni-761749 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:38:01.782695  481081 oci.go:144] the created container "newest-cni-761749" has a running status.
	I1101 10:38:01.782729  481081 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa...
	I1101 10:38:02.316491  481081 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:38:02.339614  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:02.364637  481081 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:38:02.364663  481081 kic_runner.go:114] Args: [docker exec --privileged newest-cni-761749 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:38:02.455281  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:02.473114  481081 machine.go:94] provisionDockerMachine start ...
	I1101 10:38:02.473211  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:02.497943  481081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:02.498273  481081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1101 10:38:02.498289  481081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:38:02.498962  481081 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39342->127.0.0.1:33445: read: connection reset by peer
	I1101 10:38:05.002565  477629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:05.002598  477629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:38:05.002681  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:38:05.042601  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:38:05.047000  477629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:05.047022  477629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:38:05.047083  477629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:38:05.074237  477629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:38:05.273343  477629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:38:05.291561  477629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:05.332871  477629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:05.429832  477629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:05.983548  477629 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:38:06.228171  477629 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:38:06.239720  477629 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
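The node_ready wait that begins above polls the API server until the node reports a Ready condition, retrying while the status is "Ready":"False" (the later W1101 node_ready warnings in this log are those retries). A rough client-go sketch of such a poll; the kubeconfig path, poll interval, and helper name are assumptions for illustration, not minikube's code:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node object until its Ready condition is True
// or the timeout (6m0s in the log above) expires.
func waitForNodeReady(kubeconfig, nodeName string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling, as the log does
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}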
	I1101 10:38:05.657818  481081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:05.657845  481081 ubuntu.go:182] provisioning hostname "newest-cni-761749"
	I1101 10:38:05.657916  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:05.709095  481081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:05.709506  481081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1101 10:38:05.709518  481081 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-761749 && echo "newest-cni-761749" | sudo tee /etc/hostname
	I1101 10:38:05.917882  481081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:05.917980  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:05.939653  481081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:05.939962  481081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1101 10:38:05.939979  481081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-761749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-761749/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-761749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:38:06.118114  481081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:38:06.118191  481081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:38:06.118227  481081 ubuntu.go:190] setting up certificates
	I1101 10:38:06.118268  481081 provision.go:84] configureAuth start
	I1101 10:38:06.118363  481081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:06.150975  481081 provision.go:143] copyHostCerts
	I1101 10:38:06.151047  481081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:38:06.151056  481081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:38:06.151136  481081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:38:06.151231  481081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:38:06.151236  481081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:38:06.151263  481081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:38:06.151314  481081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:38:06.151318  481081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:38:06.151342  481081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:38:06.151393  481081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.newest-cni-761749 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-761749]
	I1101 10:38:06.887250  481081 provision.go:177] copyRemoteCerts
	I1101 10:38:06.887321  481081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:38:06.887364  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:06.906550  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:07.013801  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:38:07.033528  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:38:07.051114  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:38:07.069659  481081 provision.go:87] duration metric: took 951.351655ms to configureAuth
	I1101 10:38:07.069820  481081 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:38:07.070028  481081 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:07.070137  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:07.088376  481081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:07.088691  481081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1101 10:38:07.088712  481081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:38:07.354824  481081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:38:07.354848  481081 machine.go:97] duration metric: took 4.881708175s to provisionDockerMachine
	I1101 10:38:07.354858  481081 client.go:176] duration metric: took 11.892290352s to LocalClient.Create
	I1101 10:38:07.354899  481081 start.go:167] duration metric: took 11.892350448s to libmachine.API.Create "newest-cni-761749"
	I1101 10:38:07.354913  481081 start.go:293] postStartSetup for "newest-cni-761749" (driver="docker")
	I1101 10:38:07.355170  481081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:38:07.358943  481081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:38:07.359023  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:07.379145  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:07.486217  481081 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:38:07.489685  481081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:38:07.489736  481081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:38:07.489748  481081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:38:07.489806  481081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:38:07.489900  481081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:38:07.490006  481081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:38:07.498400  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:07.517744  481081 start.go:296] duration metric: took 162.814956ms for postStartSetup
	I1101 10:38:07.518122  481081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:07.535949  481081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:07.536246  481081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:38:07.536294  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:07.554526  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:07.662782  481081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:38:07.667818  481081 start.go:128] duration metric: took 12.208912841s to createHost
	I1101 10:38:07.667843  481081 start.go:83] releasing machines lock for "newest-cni-761749", held for 12.209033336s
	I1101 10:38:07.667911  481081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:07.684573  481081 ssh_runner.go:195] Run: cat /version.json
	I1101 10:38:07.684617  481081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:38:07.684625  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:07.684685  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:07.703896  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:07.711553  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:07.818286  481081 ssh_runner.go:195] Run: systemctl --version
	I1101 10:38:07.909840  481081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:38:07.945788  481081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:38:07.950230  481081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:38:07.950310  481081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:38:07.979101  481081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:38:07.979131  481081 start.go:496] detecting cgroup driver to use...
	I1101 10:38:07.979161  481081 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:38:07.979208  481081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:38:07.997370  481081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:38:08.011784  481081 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:38:08.011851  481081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:38:08.032442  481081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:38:08.051608  481081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:38:08.178431  481081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:38:08.327046  481081 docker.go:234] disabling docker service ...
	I1101 10:38:08.327117  481081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:38:08.349419  481081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:38:08.363422  481081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:38:08.485227  481081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:38:08.615375  481081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:38:08.630273  481081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:38:08.645551  481081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:38:08.645682  481081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.654730  481081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:38:08.654883  481081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.664287  481081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.674566  481081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.683647  481081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:38:08.693000  481081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.702367  481081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.718938  481081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:08.728170  481081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:38:08.737954  481081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:38:08.745873  481081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:08.856124  481081 ssh_runner.go:195] Run: sudo systemctl restart crio
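Pieced together from the sed edits above, the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly the following (only the keys this run touches are shown; the rest of the file stays as shipped in the base image):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]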
	I1101 10:38:08.992747  481081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:38:08.992866  481081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:38:08.998705  481081 start.go:564] Will wait 60s for crictl version
	I1101 10:38:08.998800  481081 ssh_runner.go:195] Run: which crictl
	I1101 10:38:09.005350  481081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:38:09.038436  481081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:38:09.038558  481081 ssh_runner.go:195] Run: crio --version
	I1101 10:38:09.068835  481081 ssh_runner.go:195] Run: crio --version
	I1101 10:38:09.104257  481081 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:38:09.107100  481081 cli_runner.go:164] Run: docker network inspect newest-cni-761749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:38:09.127024  481081 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:38:09.131650  481081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
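The one-liner above rewrites /etc/hosts in a copy-then-replace step: it filters out any existing host.minikube.internal mapping, appends the current gateway IP, and copies the temp file back into place with sudo. A local Go equivalent of that dedupe-and-append logic (function name and file mode are illustrative assumptions):

package example

import (
	"os"
	"strings"
)

// injectHostRecord removes any stale "<ip>\t<name>" mapping and appends a fresh
// one, mirroring the grep -v / echo / cp pipeline in the log above.
func injectHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping, like `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

The same pattern repeats further down for control-plane.minikube.internal at 192.168.85.2, e.g. injectHostRecord("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal").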
	I1101 10:38:09.145206  481081 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:38:09.147998  481081 kubeadm.go:884] updating cluster {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:38:09.148139  481081 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:09.148218  481081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:09.185392  481081 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:09.185416  481081 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:38:09.185476  481081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:09.211206  481081 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:09.211228  481081 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:38:09.211236  481081 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:38:09.211322  481081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-761749 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:38:09.211414  481081 ssh_runner.go:195] Run: crio config
	I1101 10:38:09.270062  481081 cni.go:84] Creating CNI manager for ""
	I1101 10:38:09.270088  481081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:09.270101  481081 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:38:09.270128  481081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-761749 NodeName:newest-cni-761749 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:38:09.270248  481081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-761749"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:38:09.270311  481081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:38:09.278705  481081 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:38:09.278774  481081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:38:09.286276  481081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:38:09.309474  481081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:38:09.322251  481081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 10:38:09.336301  481081 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:38:09.339803  481081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:09.349362  481081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:09.468420  481081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:09.487646  481081 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749 for IP: 192.168.85.2
	I1101 10:38:09.487722  481081 certs.go:195] generating shared ca certs ...
	I1101 10:38:09.487753  481081 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:09.487950  481081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:38:09.488016  481081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:38:09.488057  481081 certs.go:257] generating profile certs ...
	I1101 10:38:09.488158  481081 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.key
	I1101 10:38:09.488203  481081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.crt with IP's: []
	I1101 10:38:09.910325  481081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.crt ...
	I1101 10:38:09.910356  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.crt: {Name:mk7533525d57cc3a7df1301d8089c81c9c1c0422 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:09.910599  481081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.key ...
	I1101 10:38:09.910615  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.key: {Name:mkabc157f88236a7b502ddc7262e344e615d2b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:09.910728  481081 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d
	I1101 10:38:09.910744  481081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt.6f5a246d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:38:06.242546  477629 addons.go:515] duration metric: took 1.286598865s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:38:06.488134  477629 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-245904" context rescaled to 1 replicas
	W1101 10:38:08.231585  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:11.148416  481081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt.6f5a246d ...
	I1101 10:38:11.148450  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt.6f5a246d: {Name:mk577efe468ec30020cf4f2a1de455592c820887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:11.148646  481081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d ...
	I1101 10:38:11.148664  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d: {Name:mka20e27abae514488f8281a886b6a401ad639a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:11.148762  481081 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt.6f5a246d -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt
	I1101 10:38:11.148851  481081 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key
	I1101 10:38:11.148928  481081 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key
	I1101 10:38:11.148949  481081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt with IP's: []
	I1101 10:38:11.592740  481081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt ...
	I1101 10:38:11.592771  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt: {Name:mk42ca95248a117cde27bfb503aea5db0ba14385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:11.592959  481081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key ...
	I1101 10:38:11.592975  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key: {Name:mk17d4a3ca02a781205d32333669bd92d3bb890e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
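The profile certificates written above are ordinary CA-signed certs whose IP SANs cover the service VIP, loopback, and node address listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A hedged crypto/x509 sketch of issuing such a serving certificate; the key size, lifetime, and subject below are illustrative, not minikube's actual parameters:

package example

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServingCert creates a new key pair and a CA-signed certificate whose
// IP SANs match the addresses the server will be reached on.
func issueServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. the four IPs from the log above
	}
	certDER, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return certDER, key, nil
}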
	I1101 10:38:11.593168  481081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:38:11.593210  481081 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:38:11.593224  481081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:38:11.593258  481081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:38:11.593287  481081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:38:11.593309  481081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:38:11.593351  481081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:11.593940  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:38:11.611846  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:38:11.630781  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:38:11.650095  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:38:11.668812  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:38:11.688826  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:38:11.708205  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:38:11.726414  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:38:11.745786  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:38:11.763874  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:38:11.781859  481081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:38:11.801662  481081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:38:11.815713  481081 ssh_runner.go:195] Run: openssl version
	I1101 10:38:11.822553  481081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:38:11.831143  481081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:11.835320  481081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:11.835386  481081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:11.876435  481081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:38:11.885184  481081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:38:11.894254  481081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:38:11.898395  481081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:38:11.898457  481081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:38:11.946056  481081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:38:11.954863  481081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:38:11.966989  481081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:38:11.972288  481081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:38:11.972369  481081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:38:12.014351  481081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
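The <hash>.0 links created above follow OpenSSL's subject-hash naming, which is what lets TLS libraries locate a CA in /etc/ssl/certs by hash (b5213941 for minikubeCA.pem, per the hashing step a few lines earlier). A small Go sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does (the helper name is an assumption):

package example

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and links
// it into the certs directory as "<hash>.0", like `ln -fs` in the log above.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, mirroring the -f in `ln -fs`
	return os.Symlink(certPath, link)
}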
	I1101 10:38:12.023983  481081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:38:12.028420  481081 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:38:12.028522  481081 kubeadm.go:401] StartCluster: {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:12.028602  481081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:38:12.028660  481081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:38:12.057661  481081 cri.go:89] found id: ""
	I1101 10:38:12.057779  481081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:38:12.066189  481081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:38:12.074340  481081 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:38:12.074436  481081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:38:12.082924  481081 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:38:12.082945  481081 kubeadm.go:158] found existing configuration files:
	
	I1101 10:38:12.083017  481081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:38:12.090847  481081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:38:12.090914  481081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:38:12.099124  481081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:38:12.107115  481081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:38:12.107227  481081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:38:12.114770  481081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:38:12.122637  481081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:38:12.122730  481081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:38:12.130261  481081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:38:12.138122  481081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:38:12.138185  481081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:38:12.145599  481081 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:38:12.188378  481081 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:38:12.188452  481081 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:38:12.215473  481081 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:38:12.215551  481081 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:38:12.215591  481081 kubeadm.go:319] OS: Linux
	I1101 10:38:12.215643  481081 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:38:12.215697  481081 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:38:12.215751  481081 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:38:12.215806  481081 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:38:12.215860  481081 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:38:12.215914  481081 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:38:12.215965  481081 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:38:12.216019  481081 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:38:12.216071  481081 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:38:12.300591  481081 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:38:12.300741  481081 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:38:12.300905  481081 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:38:12.309555  481081 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:38:12.315482  481081 out.go:252]   - Generating certificates and keys ...
	I1101 10:38:12.315596  481081 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:38:12.315685  481081 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:38:12.619476  481081 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:38:12.693974  481081 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:38:12.943863  481081 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:38:13.189535  481081 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:38:13.989429  481081 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:38:13.989836  481081 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-761749] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:38:14.329979  481081 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:38:14.330674  481081 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-761749] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:38:14.973982  481081 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	W1101 10:38:10.732083  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:13.232993  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:15.183870  481081 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:38:15.488115  481081 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:38:15.488532  481081 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:38:16.189638  481081 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:38:17.414165  481081 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:38:18.096142  481081 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:38:19.616512  481081 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:38:19.714998  481081 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:38:19.716016  481081 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:38:19.719183  481081 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:38:19.722947  481081 out.go:252]   - Booting up control plane ...
	I1101 10:38:19.723069  481081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:38:19.723178  481081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:38:19.724300  481081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:38:19.743203  481081 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:38:19.743322  481081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:38:19.754855  481081 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:38:19.755565  481081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:38:19.755730  481081 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:38:19.891218  481081 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:38:19.891351  481081 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1101 10:38:15.733017  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:18.232425  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:20.233106  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:21.394029  481081 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501693217s
	I1101 10:38:21.396065  481081 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:38:21.396157  481081 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:38:21.396450  481081 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:38:21.396538  481081 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 10:38:22.730625  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:24.731455  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:25.328992  481081 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.932462113s
	I1101 10:38:26.191697  481081 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.795499687s
	I1101 10:38:27.397555  481081 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001289925s
	I1101 10:38:27.418233  481081 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:38:27.434674  481081 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:38:27.452808  481081 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:38:27.453022  481081 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-761749 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:38:27.465936  481081 kubeadm.go:319] [bootstrap-token] Using token: q1tfkz.paoa7u9xweriuw3g
	I1101 10:38:27.471152  481081 out.go:252]   - Configuring RBAC rules ...
	I1101 10:38:27.471292  481081 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:38:27.473231  481081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:38:27.483248  481081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:38:27.487286  481081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:38:27.493935  481081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:38:27.498553  481081 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:38:27.804239  481081 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:38:28.241443  481081 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:38:28.804427  481081 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:38:28.805597  481081 kubeadm.go:319] 
	I1101 10:38:28.805683  481081 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:38:28.805719  481081 kubeadm.go:319] 
	I1101 10:38:28.805801  481081 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:38:28.805817  481081 kubeadm.go:319] 
	I1101 10:38:28.805844  481081 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:38:28.805919  481081 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:38:28.805975  481081 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:38:28.805984  481081 kubeadm.go:319] 
	I1101 10:38:28.806041  481081 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:38:28.806051  481081 kubeadm.go:319] 
	I1101 10:38:28.806101  481081 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:38:28.806108  481081 kubeadm.go:319] 
	I1101 10:38:28.806163  481081 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:38:28.806246  481081 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:38:28.806321  481081 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:38:28.806329  481081 kubeadm.go:319] 
	I1101 10:38:28.806424  481081 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:38:28.806509  481081 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:38:28.806517  481081 kubeadm.go:319] 
	I1101 10:38:28.806605  481081 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token q1tfkz.paoa7u9xweriuw3g \
	I1101 10:38:28.806715  481081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:38:28.806741  481081 kubeadm.go:319] 	--control-plane 
	I1101 10:38:28.806752  481081 kubeadm.go:319] 
	I1101 10:38:28.806847  481081 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:38:28.806859  481081 kubeadm.go:319] 
	I1101 10:38:28.806945  481081 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token q1tfkz.paoa7u9xweriuw3g \
	I1101 10:38:28.807055  481081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:38:28.812151  481081 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:38:28.812397  481081 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:38:28.812528  481081 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:38:28.812552  481081 cni.go:84] Creating CNI manager for ""
	I1101 10:38:28.812565  481081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:28.815850  481081 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:38:28.818757  481081 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:38:28.823081  481081 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:38:28.823104  481081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:38:28.840185  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:38:29.147692  481081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:38:29.147858  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:29.147945  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-761749 minikube.k8s.io/updated_at=2025_11_01T10_38_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=newest-cni-761749 minikube.k8s.io/primary=true
	I1101 10:38:29.167119  481081 ops.go:34] apiserver oom_adj: -16
	I1101 10:38:29.306789  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:29.807188  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1101 10:38:27.231302  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:29.231415  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:30.307395  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:30.806919  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:31.307572  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:31.806968  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:32.307172  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:32.807663  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:33.307414  481081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:38:33.411009  481081 kubeadm.go:1114] duration metric: took 4.263202658s to wait for elevateKubeSystemPrivileges
	I1101 10:38:33.411048  481081 kubeadm.go:403] duration metric: took 21.382530369s to StartCluster
	I1101 10:38:33.411065  481081 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:33.411126  481081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:33.412139  481081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:33.412354  481081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:38:33.412461  481081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:38:33.412688  481081 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:33.412718  481081 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:38:33.412776  481081 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-761749"
	I1101 10:38:33.412792  481081 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-761749"
	I1101 10:38:33.412813  481081 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:33.413568  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:33.413825  481081 addons.go:70] Setting default-storageclass=true in profile "newest-cni-761749"
	I1101 10:38:33.413850  481081 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-761749"
	I1101 10:38:33.414131  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:33.417673  481081 out.go:179] * Verifying Kubernetes components...
	I1101 10:38:33.422123  481081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:33.445860  481081 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:38:33.449367  481081 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:33.449388  481081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:38:33.449452  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:33.461509  481081 addons.go:239] Setting addon default-storageclass=true in "newest-cni-761749"
	I1101 10:38:33.461547  481081 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:33.462050  481081 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:33.490796  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:33.500610  481081 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:33.500632  481081 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:38:33.500693  481081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:33.528019  481081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:33.750198  481081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:33.825589  481081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:33.830261  481081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:38:33.830456  481081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:34.787728  481081 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:38:34.790214  481081 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:34.790324  481081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:34.792630  481081 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 10:38:34.795025  481081 addons.go:515] duration metric: took 1.382293563s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 10:38:34.807601  481081 api_server.go:72] duration metric: took 1.395218062s to wait for apiserver process to appear ...
	I1101 10:38:34.807622  481081 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:34.807640  481081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:34.816487  481081 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:38:34.822272  481081 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:34.822348  481081 api_server.go:131] duration metric: took 14.719018ms to wait for apiserver health ...
	I1101 10:38:34.822376  481081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:34.830276  481081 system_pods.go:59] 9 kube-system pods found
	I1101 10:38:34.830365  481081 system_pods.go:61] "coredns-66bc5c9577-dkmh7" [4ba29de7-db66-4fb3-a494-f65c332a18fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:34.830387  481081 system_pods.go:61] "coredns-66bc5c9577-splg4" [ce8663e2-d2cb-495d-8100-0a14b5c3c8e3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:34.830425  481081 system_pods.go:61] "etcd-newest-cni-761749" [01442f80-7894-4906-bcf2-310262858f81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:38:34.830450  481081 system_pods.go:61] "kindnet-kj78v" [9e32b217-03e3-4606-a267-3a45809b6648] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:38:34.830471  481081 system_pods.go:61] "kube-apiserver-newest-cni-761749" [11f59f30-302f-4408-8088-f1ad8a9151d3] Running
	I1101 10:38:34.830507  481081 system_pods.go:61] "kube-controller-manager-newest-cni-761749" [45778566-a6e7-4161-b5e3-ac477859613d] Running
	I1101 10:38:34.830538  481081 system_pods.go:61] "kube-proxy-fzkf5" [865ae218-f581-4914-b55c-fdf4d5134c58] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:38:34.830567  481081 system_pods.go:61] "kube-scheduler-newest-cni-761749" [cc737524-4ed5-438e-bc67-e23969166ef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:38:34.830608  481081 system_pods.go:61] "storage-provisioner" [33de256b-6331-467e-96be-298d220b8aa8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:34.830629  481081 system_pods.go:74] duration metric: took 8.232974ms to wait for pod list to return data ...
	I1101 10:38:34.830665  481081 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:34.836807  481081 default_sa.go:45] found service account: "default"
	I1101 10:38:34.836879  481081 default_sa.go:55] duration metric: took 6.190433ms for default service account to be created ...
	I1101 10:38:34.836909  481081 kubeadm.go:587] duration metric: took 1.42452995s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:34.836954  481081 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:34.842781  481081 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:34.842881  481081 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:34.842908  481081 node_conditions.go:105] duration metric: took 5.904488ms to run NodePressure ...
	I1101 10:38:34.842934  481081 start.go:242] waiting for startup goroutines ...
	I1101 10:38:35.292589  481081 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-761749" context rescaled to 1 replicas
	I1101 10:38:35.292628  481081 start.go:247] waiting for cluster config update ...
	I1101 10:38:35.292641  481081 start.go:256] writing updated cluster config ...
	I1101 10:38:35.292933  481081 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:35.372125  481081 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:35.375303  481081 out.go:179] * Done! kubectl is now configured to use "newest-cni-761749" cluster and "default" namespace by default
	W1101 10:38:31.730933  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:33.731398  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
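	
	A note on the CoreDNS change logged above: at 10:38:33 the run pipes the coredns ConfigMap through sed to add a hosts block mapping host.minikube.internal to 192.168.85.1, replaces the ConfigMap, and at 10:38:34 confirms the record was injected. A minimal way to verify the result by hand (assuming kubectl is pointed at the kubeconfig context minikube creates for this profile, newest-cni-761749):
	
	  kubectl --context newest-cni-761749 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'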
	
	
	==> CRI-O <==
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.186802498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.193752026Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ae6fd1f2-341e-4e68-aa98-b76f15249dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.202286749Z" level=info msg="Ran pod sandbox 27eb62370140949b87ec136c2ca68de2b5075d66d6ec400323383d07ad450ba8 with infra container: kube-system/kube-proxy-fzkf5/POD" id=ae6fd1f2-341e-4e68-aa98-b76f15249dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.204168761Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b41c46c4-f3b3-4f4e-926d-f78b5a2185cf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.206319546Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=71d2d850-2c73-4c50-a550-d8b6a3649ecd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.21429704Z" level=info msg="Running pod sandbox: kube-system/kindnet-kj78v/POD" id=0a7f22b6-0770-4ab6-b3a8-f6878ab03d31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.214568618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.221457148Z" level=info msg="Creating container: kube-system/kube-proxy-fzkf5/kube-proxy" id=bca8120d-ac9e-45d6-b5c2-951d7500bdb1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.224396264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.227279386Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0a7f22b6-0770-4ab6-b3a8-f6878ab03d31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.243688339Z" level=info msg="Ran pod sandbox 677b589f0c7111310bb5c881eedd4e9df31779bef22c5fd3ac0057671da69728 with infra container: kube-system/kindnet-kj78v/POD" id=0a7f22b6-0770-4ab6-b3a8-f6878ab03d31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.246332835Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=360f2654-b982-4664-be9b-4d4e88e9e3c2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.249067508Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=dd7b3c68-1a06-4867-929c-c2cef60ccdc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.262305801Z" level=info msg="Creating container: kube-system/kindnet-kj78v/kindnet-cni" id=a43eadb7-64f0-4421-b5bd-a2d7017428b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.262662058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.270871589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.2729262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.284362835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.285357897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.334845226Z" level=info msg="Created container 51f31f314efd3cf6c3a915511ac593e5acd2dc20e366117eee5639291d0f6ec5: kube-system/kindnet-kj78v/kindnet-cni" id=a43eadb7-64f0-4421-b5bd-a2d7017428b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.337289021Z" level=info msg="Starting container: 51f31f314efd3cf6c3a915511ac593e5acd2dc20e366117eee5639291d0f6ec5" id=2c21e45d-0e44-413a-b32c-4075d6608809 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.347896709Z" level=info msg="Started container" PID=1496 containerID=51f31f314efd3cf6c3a915511ac593e5acd2dc20e366117eee5639291d0f6ec5 description=kube-system/kindnet-kj78v/kindnet-cni id=2c21e45d-0e44-413a-b32c-4075d6608809 name=/runtime.v1.RuntimeService/StartContainer sandboxID=677b589f0c7111310bb5c881eedd4e9df31779bef22c5fd3ac0057671da69728
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.390160814Z" level=info msg="Created container 0c472ddb01c871d98f2b91816d49d4d02d3b9581d705c45d201c67cb5262f078: kube-system/kube-proxy-fzkf5/kube-proxy" id=bca8120d-ac9e-45d6-b5c2-951d7500bdb1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.394540235Z" level=info msg="Starting container: 0c472ddb01c871d98f2b91816d49d4d02d3b9581d705c45d201c67cb5262f078" id=548b8827-1bfb-49db-9d04-de2a85e48ebb name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:34 newest-cni-761749 crio[837]: time="2025-11-01T10:38:34.400407003Z" level=info msg="Started container" PID=1500 containerID=0c472ddb01c871d98f2b91816d49d4d02d3b9581d705c45d201c67cb5262f078 description=kube-system/kube-proxy-fzkf5/kube-proxy id=548b8827-1bfb-49db-9d04-de2a85e48ebb name=/runtime.v1.RuntimeService/StartContainer sandboxID=27eb62370140949b87ec136c2ca68de2b5075d66d6ec400323383d07ad450ba8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c472ddb01c87       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   27eb623701409       kube-proxy-fzkf5                            kube-system
	51f31f314efd3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   677b589f0c711       kindnet-kj78v                               kube-system
	7d8708b53dfc9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   0aedd92abc7b6       etcd-newest-cni-761749                      kube-system
	1952e6e5998fa       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   e1bddf60f9cf2       kube-scheduler-newest-cni-761749            kube-system
	a282905f0ccb0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   dcb60f480bd4f       kube-controller-manager-newest-cni-761749   kube-system
	b7db5f6902a4b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   9b18c09e3fd69       kube-apiserver-newest-cni-761749            kube-system
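	
	The table above matches crictl's ps output; it can be reproduced against this node (assuming the docker-driver container newest-cni-761749 from this run is still up) with:
	
	  docker exec newest-cni-761749 crictl ps -a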
	
	
	==> describe nodes <==
	Name:               newest-cni-761749
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-761749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=newest-cni-761749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_38_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:38:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-761749
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:38:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:38:28 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:38:28 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:38:28 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:38:28 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-761749
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d919014c-b008-45f7-b1e1-0de245f57299
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-761749                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-kj78v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-761749             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-761749    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-fzkf5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-761749             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2s    kube-proxy       
	  Normal   Starting                 8s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s    kubelet          Node newest-cni-761749 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-761749 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s    kubelet          Node newest-cni-761749 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-761749 event: Registered Node newest-cni-761749 in Controller
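	
	The Ready=False condition above is explained by its own message: no CNI configuration file had been written to /etc/cni/net.d/ when this snapshot was taken (kindnet had only just started, per the events). A quick check on the node, assuming the same docker-driver container name as above:
	
	  docker exec newest-cni-761749 ls /etc/cni/net.d/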
	
	
	==> dmesg <==
	[Nov 1 10:17] overlayfs: idmapped layers are currently not supported
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7d8708b53dfc9b296f94152ae1a5cc56e8c685fae75a121e372edc8f37e76321] <==
	{"level":"warn","ts":"2025-11-01T10:38:23.713350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.754508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.759710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.766070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.785580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.808500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.827702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.842935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.856215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.921944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.928776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:23.960202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.049861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.058503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.099516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.158243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.160150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.177863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.218597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.258264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.319861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.339091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.352279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.373310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:24.465645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:36 up  2:21,  0 user,  load average: 4.07, 4.11, 3.29
	Linux newest-cni-761749 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [51f31f314efd3cf6c3a915511ac593e5acd2dc20e366117eee5639291d0f6ec5] <==
	I1101 10:38:34.438157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:38:34.438451       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:38:34.438570       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:38:34.438581       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:38:34.438593       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:38:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:38:34.719141       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:38:34.720266       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:38:34.720341       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:38:34.721194       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b7db5f6902a4b3bb04c9d855f757434d447627d15650e921456b7659de2bcb7a] <==
	I1101 10:38:25.532824       1 policy_source.go:240] refreshing policies
	I1101 10:38:25.551710       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:38:25.592865       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:38:25.669769       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:38:25.670444       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:38:25.670532       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:38:25.692231       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:38:25.694101       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:38:26.320692       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:38:26.327614       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:38:26.327636       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:38:27.032753       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:38:27.096572       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:38:27.248803       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:38:27.257393       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:38:27.258549       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:38:27.263691       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:38:27.632193       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:38:28.219630       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:38:28.240484       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:38:28.254918       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:38:32.986278       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:38:32.994507       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:38:33.338405       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:38:33.770586       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a282905f0ccb096fd11e477956a4281a4ba47c7f2d64dd58078c198135901180] <==
	I1101 10:38:32.661455       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:38:32.665820       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:38:32.672025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:32.675552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:32.675581       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:38:32.675589       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:38:32.675920       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:38:32.677857       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:38:32.678155       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:38:32.679468       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-761749"
	I1101 10:38:32.678349       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:38:32.679265       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:38:32.682011       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:38:32.679328       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:38:32.682151       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:38:32.679281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:38:32.679297       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:38:32.679338       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:38:32.679348       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:38:32.679356       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:38:32.679366       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:38:32.679375       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:38:32.679381       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:38:32.683988       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:38:32.689196       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [0c472ddb01c871d98f2b91816d49d4d02d3b9581d705c45d201c67cb5262f078] <==
	I1101 10:38:34.464848       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:38:34.560364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:38:34.661076       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:38:34.661138       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:38:34.661225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:38:34.684061       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:38:34.684186       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:38:34.688321       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:38:34.688941       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:38:34.689146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:34.690484       1 config.go:200] "Starting service config controller"
	I1101 10:38:34.690545       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:38:34.690597       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:38:34.690627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:38:34.690664       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:38:34.690690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:38:34.691394       1 config.go:309] "Starting node config controller"
	I1101 10:38:34.693665       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:38:34.693755       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:38:34.790801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:38:34.790847       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:38:34.790887       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1952e6e5998fa5bb2272a17cfa0805ad716886b8afba9c5fc4e152a3950e8633] <==
	I1101 10:38:26.180571       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:26.182718       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:26.182794       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:26.183142       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:38:26.183199       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:38:26.195808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:38:26.201554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:38:26.201855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:38:26.201900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:38:26.201936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:38:26.201973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:38:26.202026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:38:26.202068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:38:26.202103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:38:26.202170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:38:26.202201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:38:26.202252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:38:26.202315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:38:26.202350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:38:26.202948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:38:26.202997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:38:26.204123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:38:26.204176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:38:26.204208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1101 10:38:27.483533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.206811    1308 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.340489    1308 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-761749"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.340744    1308 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.340936    1308 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-761749"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: E1101 10:38:29.361180    1308 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-761749\" already exists" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: E1101 10:38:29.375851    1308 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-761749\" already exists" pod="kube-system/etcd-newest-cni-761749"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: E1101 10:38:29.381082    1308 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-761749\" already exists" pod="kube-system/kube-apiserver-newest-cni-761749"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.405885    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-761749" podStartSLOduration=1.405851235 podStartE2EDuration="1.405851235s" podCreationTimestamp="2025-11-01 10:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:29.382350891 +0000 UTC m=+1.311838436" watchObservedRunningTime="2025-11-01 10:38:29.405851235 +0000 UTC m=+1.335338764"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.424351    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-761749" podStartSLOduration=1.4243350989999999 podStartE2EDuration="1.424335099s" podCreationTimestamp="2025-11-01 10:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:29.406644923 +0000 UTC m=+1.336132461" watchObservedRunningTime="2025-11-01 10:38:29.424335099 +0000 UTC m=+1.353822620"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.464880    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-761749" podStartSLOduration=1.464834653 podStartE2EDuration="1.464834653s" podCreationTimestamp="2025-11-01 10:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:29.426644886 +0000 UTC m=+1.356132407" watchObservedRunningTime="2025-11-01 10:38:29.464834653 +0000 UTC m=+1.394322190"
	Nov 01 10:38:29 newest-cni-761749 kubelet[1308]: I1101 10:38:29.492063    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-761749" podStartSLOduration=1.4920464500000001 podStartE2EDuration="1.49204645s" podCreationTimestamp="2025-11-01 10:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:29.466563325 +0000 UTC m=+1.396050862" watchObservedRunningTime="2025-11-01 10:38:29.49204645 +0000 UTC m=+1.421533971"
	Nov 01 10:38:32 newest-cni-761749 kubelet[1308]: I1101 10:38:32.680356    1308 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:38:32 newest-cni-761749 kubelet[1308]: I1101 10:38:32.681764    1308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969216    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-cni-cfg\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969253    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/865ae218-f581-4914-b55c-fdf4d5134c58-lib-modules\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969273    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-lib-modules\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969294    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/865ae218-f581-4914-b55c-fdf4d5134c58-kube-proxy\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969311    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9c8c\" (UniqueName: \"kubernetes.io/projected/865ae218-f581-4914-b55c-fdf4d5134c58-kube-api-access-b9c8c\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969329    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-xtables-lock\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969353    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk8b\" (UniqueName: \"kubernetes.io/projected/9e32b217-03e3-4606-a267-3a45809b6648-kube-api-access-mpk8b\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:33 newest-cni-761749 kubelet[1308]: I1101 10:38:33.969372    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/865ae218-f581-4914-b55c-fdf4d5134c58-xtables-lock\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:34 newest-cni-761749 kubelet[1308]: I1101 10:38:34.086235    1308 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:38:34 newest-cni-761749 kubelet[1308]: W1101 10:38:34.199347    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/crio-27eb62370140949b87ec136c2ca68de2b5075d66d6ec400323383d07ad450ba8 WatchSource:0}: Error finding container 27eb62370140949b87ec136c2ca68de2b5075d66d6ec400323383d07ad450ba8: Status 404 returned error can't find the container with id 27eb62370140949b87ec136c2ca68de2b5075d66d6ec400323383d07ad450ba8
	Nov 01 10:38:34 newest-cni-761749 kubelet[1308]: W1101 10:38:34.240586    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/crio-677b589f0c7111310bb5c881eedd4e9df31779bef22c5fd3ac0057671da69728 WatchSource:0}: Error finding container 677b589f0c7111310bb5c881eedd4e9df31779bef22c5fd3ac0057671da69728: Status 404 returned error can't find the container with id 677b589f0c7111310bb5c881eedd4e9df31779bef22c5fd3ac0057671da69728
	Nov 01 10:38:35 newest-cni-761749 kubelet[1308]: I1101 10:38:35.473815    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kj78v" podStartSLOduration=2.473794995 podStartE2EDuration="2.473794995s" podCreationTimestamp="2025-11-01 10:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:35.413967972 +0000 UTC m=+7.343455509" watchObservedRunningTime="2025-11-01 10:38:35.473794995 +0000 UTC m=+7.403282516"
	

                                                
                                                
-- /stdout --
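The kube-scheduler "Failed to watch ... is forbidden" errors near the top of that dump come from the window right after the restart, before the apiserver has finished reconciling its default RBAC policy; the "Caches are synced" line one second later suggests the scheduler recovered on its own. A hedged triage step (not part of the test) to confirm the permissions are in place once the cluster is up is to impersonate the scheduler with kubectl:

	# the impersonated identity matches the user named in the errors above
	kubectl --context newest-cni-761749 auth can-i list statefulsets.apps --as=system:kube-scheduler
	kubectl --context newest-cni-761749 auth can-i list persistentvolumes --as=system:kube-scheduler

Both should print "yes" on a healthy cluster; a persistent "no" would point at a genuine RBAC problem rather than startup ordering.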
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-761749 -n newest-cni-761749
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-761749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dkmh7 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner: exit status 1 (80.196188ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dkmh7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.36s)
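A note on the post-mortem noise above: the helper lists non-Running pods with a field selector and then describes them by bare name. The describe call carries no -n flag, so it searches the default namespace, while the pods it names live in kube-system (and may already have been replaced), so the NotFound output is expected rather than a second failure. A namespaced variant of the same two steps, assuming the pod still exists when it runs:

	kubectl --context newest-cni-761749 get po -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-761749 -n kube-system describe pod coredns-66bc5c9577-dkmh7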

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-761749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-761749 --alsologtostderr -v=1: exit status 80 (2.210649998s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-761749 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:38:55.517732  486450 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:38:55.519118  486450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:55.519166  486450 out.go:374] Setting ErrFile to fd 2...
	I1101 10:38:55.519188  486450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:55.519552  486450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:38:55.519912  486450 out.go:368] Setting JSON to false
	I1101 10:38:55.519977  486450 mustload.go:66] Loading cluster: newest-cni-761749
	I1101 10:38:55.520460  486450 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:55.521039  486450 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:55.541603  486450 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:55.542113  486450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:55.650428  486450 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:38:55.637073723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:55.651119  486450 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-761749 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:38:55.654516  486450 out.go:179] * Pausing node newest-cni-761749 ... 
	I1101 10:38:55.658326  486450 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:55.658844  486450 ssh_runner.go:195] Run: systemctl --version
	I1101 10:38:55.658939  486450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:55.693807  486450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:55.802790  486450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:38:55.828454  486450 pause.go:52] kubelet running: true
	I1101 10:38:55.828589  486450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:38:56.149088  486450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:38:56.149179  486450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:38:56.235974  486450 cri.go:89] found id: "7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8"
	I1101 10:38:56.236043  486450 cri.go:89] found id: "a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4"
	I1101 10:38:56.236077  486450 cri.go:89] found id: "de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3"
	I1101 10:38:56.236100  486450 cri.go:89] found id: "414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635"
	I1101 10:38:56.236121  486450 cri.go:89] found id: "b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf"
	I1101 10:38:56.236157  486450 cri.go:89] found id: "8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150"
	I1101 10:38:56.236180  486450 cri.go:89] found id: ""
	I1101 10:38:56.236258  486450 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:38:56.248995  486450 retry.go:31] will retry after 292.712748ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:56Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:38:56.542330  486450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:38:56.556698  486450 pause.go:52] kubelet running: false
	I1101 10:38:56.556759  486450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:38:56.733529  486450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:38:56.733644  486450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:38:56.807097  486450 cri.go:89] found id: "7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8"
	I1101 10:38:56.807129  486450 cri.go:89] found id: "a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4"
	I1101 10:38:56.807135  486450 cri.go:89] found id: "de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3"
	I1101 10:38:56.807139  486450 cri.go:89] found id: "414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635"
	I1101 10:38:56.807143  486450 cri.go:89] found id: "b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf"
	I1101 10:38:56.807163  486450 cri.go:89] found id: "8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150"
	I1101 10:38:56.807171  486450 cri.go:89] found id: ""
	I1101 10:38:56.807236  486450 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:38:56.818409  486450 retry.go:31] will retry after 555.249128ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:56Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:38:57.373918  486450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:38:57.387502  486450 pause.go:52] kubelet running: false
	I1101 10:38:57.387567  486450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:38:57.548813  486450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:38:57.548889  486450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:38:57.620969  486450 cri.go:89] found id: "7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8"
	I1101 10:38:57.620990  486450 cri.go:89] found id: "a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4"
	I1101 10:38:57.620994  486450 cri.go:89] found id: "de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3"
	I1101 10:38:57.620998  486450 cri.go:89] found id: "414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635"
	I1101 10:38:57.621002  486450 cri.go:89] found id: "b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf"
	I1101 10:38:57.621007  486450 cri.go:89] found id: "8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150"
	I1101 10:38:57.621010  486450 cri.go:89] found id: ""
	I1101 10:38:57.621063  486450 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:38:57.635649  486450 out.go:203] 
	W1101 10:38:57.638519  486450 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:38:57.638559  486450 out.go:285] * 
	* 
	W1101 10:38:57.645828  486450 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:38:57.648733  486450 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-761749 --alsologtostderr -v=1 failed: exit status 80
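Reading the stderr above: pause first disables the kubelet, enumerates containers in the kube-system, kubernetes-dashboard and istio-operator namespaces through crictl (six IDs are found each round), then asks runc for the list of running containers. That last step fails on every retry with "open /run/runc: no such file or directory", so the command gives up with GUEST_PAUSE. One plausible explanation, not established by this log, is that the CRI-O runtime on this image keeps its state under a different root than the default runc path. A hedged way to check directly on the node (commands assumed, run via minikube ssh):

	out/minikube-linux-arm64 -p newest-cni-761749 ssh "sudo runc list -f json"        # reproduce the failing call
	out/minikube-linux-arm64 -p newest-cni-761749 ssh "ls -d /run/runc /run/crun"     # see which runtime state dir exists, if any
	out/minikube-linux-arm64 -p newest-cni-761749 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

If the state turns out to live under a non-default root, runc --root <dir> list -f json would be the equivalent query.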
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-761749
helpers_test.go:243: (dbg) docker inspect newest-cni-761749:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e",
	        "Created": "2025-11-01T10:38:01.36860666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:38:39.67860239Z",
	            "FinishedAt": "2025-11-01T10:38:38.709089471Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/hosts",
	        "LogPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e-json.log",
	        "Name": "/newest-cni-761749",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-761749:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-761749",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e",
	                "LowerDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-761749",
	                "Source": "/var/lib/docker/volumes/newest-cni-761749/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-761749",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-761749",
	                "name.minikube.sigs.k8s.io": "newest-cni-761749",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ed1504feddf2b20dc6c25467247fa776c737289e581d25e60278d67c81a2ea1",
	            "SandboxKey": "/var/run/docker/netns/7ed1504feddf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-761749": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:3e:72:ce:41:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5c14f0066ec7c912b0be843273782822de5f27a5f2c689449899d5fe3a845a2",
	                    "EndpointID": "c3cbc1529306f73cf22c158107d6c00a5d1f610fc4f490dbaadd70d5db269086",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-761749",
	                        "b0ea1613e7b9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
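The inspect output explains the port numbers seen earlier: HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports, and NetworkSettings.Ports records the actual mapping (22/tcp -> 33450, 8443/tcp -> 33453). The pause command above resolved the SSH port with exactly this Go template; the same one-liners can be run by hand against this container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-761749
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-761749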
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749: exit status 2 (361.79672ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
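Host prints Running while the status command exits 2, which is consistent with the aborted pause: the kubelet had just been disabled (systemctl disable --now kubelet) before the runc listing failed, so the node container is up but the Kubernetes components are not. To see several components at once, the same --format flag accepts a wider template; field names beyond Host and APIServer (the two the harness already uses) are assumed from minikube's status output rather than taken from this log:

	out/minikube-linux-arm64 status -p newest-cni-761749 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'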
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-761749 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-761749 logs -n 25: (1.375537975s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-170467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p newest-cni-761749 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-761749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ newest-cni-761749 image list --format=json                                                                                                                                                                                                    │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-761749 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:38:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:38:39.399270  484563 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:38:39.399466  484563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:39.399495  484563 out.go:374] Setting ErrFile to fd 2...
	I1101 10:38:39.399516  484563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:39.399817  484563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:38:39.400255  484563 out.go:368] Setting JSON to false
	I1101 10:38:39.401281  484563 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8469,"bootTime":1761985051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:38:39.401381  484563 start.go:143] virtualization:  
	I1101 10:38:39.406465  484563 out.go:179] * [newest-cni-761749] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:38:39.409727  484563 notify.go:221] Checking for updates...
	I1101 10:38:39.410613  484563 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:38:39.413803  484563 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:38:39.416769  484563 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:39.419641  484563 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:38:39.422574  484563 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:38:39.425407  484563 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:38:39.429341  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:39.429959  484563 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:38:39.463477  484563 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:38:39.463592  484563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:39.521776  484563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:38:39.511848177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:39.521889  484563 docker.go:319] overlay module found
	I1101 10:38:39.525055  484563 out.go:179] * Using the docker driver based on existing profile
	I1101 10:38:39.527855  484563 start.go:309] selected driver: docker
	I1101 10:38:39.527878  484563 start.go:930] validating driver "docker" against &{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:39.527989  484563 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:38:39.528718  484563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:39.590420  484563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:38:39.581000966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:39.590774  484563 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:39.590815  484563 cni.go:84] Creating CNI manager for ""
	I1101 10:38:39.590877  484563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:39.590958  484563 start.go:353] cluster config:
	{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:39.595980  484563 out.go:179] * Starting "newest-cni-761749" primary control-plane node in "newest-cni-761749" cluster
	I1101 10:38:39.598792  484563 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:38:39.601801  484563 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:38:39.604630  484563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:39.604684  484563 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:38:39.604697  484563 cache.go:59] Caching tarball of preloaded images
	I1101 10:38:39.604726  484563 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:38:39.604800  484563 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:38:39.604811  484563 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:38:39.604967  484563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:39.623857  484563 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:38:39.623882  484563 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:38:39.623895  484563 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:38:39.623917  484563 start.go:360] acquireMachinesLock for newest-cni-761749: {Name:mkbbc8f02c65f1e3740f70e3b6e44f341f2e91e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:38:39.623975  484563 start.go:364] duration metric: took 35.488µs to acquireMachinesLock for "newest-cni-761749"
	I1101 10:38:39.623998  484563 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:38:39.624007  484563 fix.go:54] fixHost starting: 
	I1101 10:38:39.624350  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:39.642020  484563 fix.go:112] recreateIfNeeded on newest-cni-761749: state=Stopped err=<nil>
	W1101 10:38:39.642047  484563 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:38:35.732258  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:38.231755  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:40.232491  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:39.645338  484563 out.go:252] * Restarting existing docker container for "newest-cni-761749" ...
	I1101 10:38:39.645430  484563 cli_runner.go:164] Run: docker start newest-cni-761749
	I1101 10:38:39.921415  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:39.945905  484563 kic.go:430] container "newest-cni-761749" state is running.
	I1101 10:38:39.946279  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:39.969820  484563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:39.970049  484563 machine.go:94] provisionDockerMachine start ...
	I1101 10:38:39.970109  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:39.991857  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:39.992553  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:39.992582  484563 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:38:39.994771  484563 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39040->127.0.0.1:33450: read: connection reset by peer
	I1101 10:38:43.149549  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:43.149575  484563 ubuntu.go:182] provisioning hostname "newest-cni-761749"
	I1101 10:38:43.149664  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:43.172331  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:43.172643  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:43.172660  484563 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-761749 && echo "newest-cni-761749" | sudo tee /etc/hostname
	I1101 10:38:43.336353  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:43.336479  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:43.356550  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:43.356862  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:43.356878  484563 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-761749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-761749/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-761749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:38:43.510375  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:38:43.510404  484563 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:38:43.510428  484563 ubuntu.go:190] setting up certificates
	I1101 10:38:43.510446  484563 provision.go:84] configureAuth start
	I1101 10:38:43.510522  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:43.528959  484563 provision.go:143] copyHostCerts
	I1101 10:38:43.529048  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:38:43.529068  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:38:43.529166  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:38:43.529286  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:38:43.529299  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:38:43.529333  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:38:43.529426  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:38:43.529438  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:38:43.529479  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:38:43.529552  484563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.newest-cni-761749 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-761749]
	I1101 10:38:44.113512  484563 provision.go:177] copyRemoteCerts
	I1101 10:38:44.113610  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:38:44.113675  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.131710  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.238514  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:38:44.256546  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:38:44.275881  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:38:44.303245  484563 provision.go:87] duration metric: took 792.773225ms to configureAuth
	I1101 10:38:44.303272  484563 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:38:44.303483  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:44.303590  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.322212  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:44.322526  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:44.322546  484563 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1101 10:38:42.732191  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:45.234071  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:44.622724  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:38:44.622790  484563 machine.go:97] duration metric: took 4.652732465s to provisionDockerMachine
	I1101 10:38:44.622806  484563 start.go:293] postStartSetup for "newest-cni-761749" (driver="docker")
	I1101 10:38:44.622817  484563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:38:44.622913  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:38:44.622958  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.642026  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.750354  484563 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:38:44.754548  484563 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:38:44.754576  484563 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:38:44.754587  484563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:38:44.754647  484563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:38:44.754735  484563 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:38:44.754839  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:38:44.763939  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:44.784636  484563 start.go:296] duration metric: took 161.813814ms for postStartSetup
	I1101 10:38:44.784735  484563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:38:44.784780  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.802216  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.908521  484563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:38:44.913394  484563 fix.go:56] duration metric: took 5.289380019s for fixHost
	I1101 10:38:44.913418  484563 start.go:83] releasing machines lock for "newest-cni-761749", held for 5.289429826s
	I1101 10:38:44.913498  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:44.930512  484563 ssh_runner.go:195] Run: cat /version.json
	I1101 10:38:44.930572  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.930871  484563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:38:44.930937  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.950674  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.963879  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:45.252302  484563 ssh_runner.go:195] Run: systemctl --version
	I1101 10:38:45.262558  484563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:38:45.336279  484563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:38:45.342222  484563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:38:45.342299  484563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:38:45.352867  484563 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:38:45.352908  484563 start.go:496] detecting cgroup driver to use...
	I1101 10:38:45.352976  484563 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:38:45.353057  484563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:38:45.373242  484563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:38:45.394512  484563 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:38:45.394588  484563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:38:45.417203  484563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:38:45.438615  484563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:38:45.590047  484563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:38:45.743848  484563 docker.go:234] disabling docker service ...
	I1101 10:38:45.744008  484563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:38:45.767987  484563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:38:45.782983  484563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:38:45.962814  484563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:38:46.126117  484563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:38:46.149830  484563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:38:46.179257  484563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:38:46.179336  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.196552  484563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:38:46.196638  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.206305  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.216430  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.226931  484563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:38:46.237000  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.246883  484563 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.256657  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.267521  484563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:38:46.275521  484563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:38:46.282901  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:46.407343  484563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:38:46.530548  484563 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:38:46.530664  484563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:38:46.534806  484563 start.go:564] Will wait 60s for crictl version
	I1101 10:38:46.534901  484563 ssh_runner.go:195] Run: which crictl
	I1101 10:38:46.538455  484563 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:38:46.563072  484563 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:38:46.563175  484563 ssh_runner.go:195] Run: crio --version
	I1101 10:38:46.591515  484563 ssh_runner.go:195] Run: crio --version
	I1101 10:38:46.624776  484563 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:38:46.627856  484563 cli_runner.go:164] Run: docker network inspect newest-cni-761749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:38:46.644268  484563 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:38:46.648330  484563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:46.666545  484563 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:38:46.669479  484563 kubeadm.go:884] updating cluster {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:38:46.669612  484563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:46.669722  484563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:46.704158  484563 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:46.704186  484563 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:38:46.704246  484563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:46.730433  484563 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:46.730458  484563 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:38:46.730467  484563 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:38:46.730570  484563 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-761749 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:38:46.730659  484563 ssh_runner.go:195] Run: crio config
	I1101 10:38:46.822263  484563 cni.go:84] Creating CNI manager for ""
	I1101 10:38:46.822297  484563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:46.822310  484563 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:38:46.822335  484563 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-761749 NodeName:newest-cni-761749 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:38:46.822479  484563 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-761749"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:38:46.822563  484563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:38:46.835713  484563 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:38:46.835822  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:38:46.846259  484563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:38:46.861049  484563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:38:46.874167  484563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 10:38:46.887234  484563 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:38:46.891045  484563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:46.901353  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:47.023234  484563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:47.044268  484563 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749 for IP: 192.168.85.2
	I1101 10:38:47.044300  484563 certs.go:195] generating shared ca certs ...
	I1101 10:38:47.044338  484563 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.044559  484563 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:38:47.044631  484563 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:38:47.044645  484563 certs.go:257] generating profile certs ...
	I1101 10:38:47.044758  484563 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.key
	I1101 10:38:47.044870  484563 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d
	I1101 10:38:47.044947  484563 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key
	I1101 10:38:47.045096  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:38:47.045158  484563 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:38:47.045175  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:38:47.045226  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:38:47.045270  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:38:47.045329  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:38:47.045397  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:47.046200  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:38:47.064415  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:38:47.081836  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:38:47.099624  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:38:47.117450  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:38:47.136819  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:38:47.160266  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:38:47.190759  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:38:47.212958  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:38:47.240289  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:38:47.265449  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:38:47.285751  484563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:38:47.308157  484563 ssh_runner.go:195] Run: openssl version
	I1101 10:38:47.314558  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:38:47.324099  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.328030  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.328148  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.391207  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:38:47.401746  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:38:47.410355  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.414312  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.414374  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.456845  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:38:47.465162  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:38:47.473840  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.478063  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.478184  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.519316  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:38:47.527779  484563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:38:47.531750  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:38:47.577023  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:38:47.620010  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:38:47.663101  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:38:47.713848  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:38:47.765080  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:38:47.819911  484563 kubeadm.go:401] StartCluster: {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:47.820050  484563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:38:47.820146  484563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:38:47.922806  484563 cri.go:89] found id: "de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3"
	I1101 10:38:47.922871  484563 cri.go:89] found id: "414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635"
	I1101 10:38:47.922902  484563 cri.go:89] found id: "b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf"
	I1101 10:38:47.922985  484563 cri.go:89] found id: "8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150"
	I1101 10:38:47.923009  484563 cri.go:89] found id: ""
	I1101 10:38:47.923078  484563 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:38:47.947198  484563 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:47Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:38:47.947337  484563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:38:47.959160  484563 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:38:47.959231  484563 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:38:47.959303  484563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:38:47.975928  484563 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:38:47.976553  484563 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-761749" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:47.976847  484563 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-761749" cluster setting kubeconfig missing "newest-cni-761749" context setting]
	I1101 10:38:47.977318  484563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.978789  484563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:38:47.998019  484563 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:38:47.998092  484563 kubeadm.go:602] duration metric: took 38.840815ms to restartPrimaryControlPlane
	I1101 10:38:47.998118  484563 kubeadm.go:403] duration metric: took 178.215958ms to StartCluster
	I1101 10:38:47.998147  484563 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.998232  484563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:47.999204  484563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.999476  484563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:38:47.999852  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:47.999926  484563 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:38:48.000005  484563 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-761749"
	I1101 10:38:48.000021  484563 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-761749"
	W1101 10:38:48.000028  484563 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:38:48.000051  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.000545  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.000954  484563 addons.go:70] Setting dashboard=true in profile "newest-cni-761749"
	I1101 10:38:48.000998  484563 addons.go:239] Setting addon dashboard=true in "newest-cni-761749"
	W1101 10:38:48.001030  484563 addons.go:248] addon dashboard should already be in state true
	I1101 10:38:48.001077  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.001215  484563 addons.go:70] Setting default-storageclass=true in profile "newest-cni-761749"
	I1101 10:38:48.001230  484563 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-761749"
	I1101 10:38:48.001509  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.002410  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.008079  484563 out.go:179] * Verifying Kubernetes components...
	I1101 10:38:48.013179  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:48.055991  484563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:38:48.058957  484563 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:48.058982  484563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:38:48.059053  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.070894  484563 addons.go:239] Setting addon default-storageclass=true in "newest-cni-761749"
	W1101 10:38:48.070919  484563 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:38:48.070946  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.071361  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.075882  484563 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:38:48.078821  484563 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:38:45.732277  477629 node_ready.go:49] node "default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:45.732306  477629 node_ready.go:38] duration metric: took 39.504103123s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:38:45.732320  477629 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:45.732374  477629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:45.762283  477629 api_server.go:72] duration metric: took 40.806706118s to wait for apiserver process to appear ...
	I1101 10:38:45.762306  477629 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:45.762336  477629 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:38:45.773094  477629 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:38:45.778354  477629 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:45.778380  477629 api_server.go:131] duration metric: took 16.066881ms to wait for apiserver health ...
	I1101 10:38:45.778389  477629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:45.788072  477629 system_pods.go:59] 8 kube-system pods found
	I1101 10:38:45.788132  477629 system_pods.go:61] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:45.788140  477629 system_pods.go:61] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:45.788146  477629 system_pods.go:61] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:45.788150  477629 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:45.788155  477629 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:45.788173  477629 system_pods.go:61] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:45.788177  477629 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:45.788184  477629 system_pods.go:61] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:45.788191  477629 system_pods.go:74] duration metric: took 9.797424ms to wait for pod list to return data ...
	I1101 10:38:45.788206  477629 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:45.799144  477629 default_sa.go:45] found service account: "default"
	I1101 10:38:45.799169  477629 default_sa.go:55] duration metric: took 10.95587ms for default service account to be created ...
	I1101 10:38:45.799185  477629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:38:45.807183  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:45.807214  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:45.807221  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:45.807229  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:45.807234  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:45.807239  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:45.807243  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:45.807247  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:45.807252  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:45.807274  477629 retry.go:31] will retry after 310.68281ms: missing components: kube-dns
	I1101 10:38:46.136392  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.136430  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.136437  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.136446  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.136450  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.136454  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.136458  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.136463  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.136469  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.136487  477629 retry.go:31] will retry after 306.636472ms: missing components: kube-dns
	I1101 10:38:46.447474  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.447510  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.447517  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.447524  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.447529  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.447533  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.447537  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.447542  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.447548  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.447561  477629 retry.go:31] will retry after 319.925041ms: missing components: kube-dns
	I1101 10:38:46.772305  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.772339  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.772347  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.772353  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.772357  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.772361  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.772365  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.772369  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.772375  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.772389  477629 retry.go:31] will retry after 564.006275ms: missing components: kube-dns
	I1101 10:38:47.341207  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:47.341234  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running
	I1101 10:38:47.341242  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:47.341248  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:47.341253  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:47.341258  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:47.341262  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:47.341266  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:47.341270  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:38:47.341277  477629 system_pods.go:126] duration metric: took 1.54208615s to wait for k8s-apps to be running ...
	I1101 10:38:47.341284  477629 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:38:47.341341  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:38:47.357836  477629 system_svc.go:56] duration metric: took 16.542098ms WaitForService to wait for kubelet
	I1101 10:38:47.357861  477629 kubeadm.go:587] duration metric: took 42.402290232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:38:47.357880  477629 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:47.361122  477629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:47.361194  477629 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:47.361224  477629 node_conditions.go:105] duration metric: took 3.336874ms to run NodePressure ...
	I1101 10:38:47.361249  477629 start.go:242] waiting for startup goroutines ...
	I1101 10:38:47.361281  477629 start.go:247] waiting for cluster config update ...
	I1101 10:38:47.361311  477629 start.go:256] writing updated cluster config ...
	I1101 10:38:47.361638  477629 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:47.366602  477629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:38:47.370670  477629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.376318  477629 pod_ready.go:94] pod "coredns-66bc5c9577-h2552" is "Ready"
	I1101 10:38:47.376383  477629 pod_ready.go:86] duration metric: took 5.693233ms for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.379098  477629 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.384645  477629 pod_ready.go:94] pod "etcd-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.384719  477629 pod_ready.go:86] duration metric: took 5.55184ms for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.387276  477629 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.392529  477629 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.392596  477629 pod_ready.go:86] duration metric: took 5.257927ms for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.398622  477629 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.772080  477629 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.772159  477629 pod_ready.go:86] duration metric: took 373.468907ms for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.970970  477629 pod_ready.go:83] waiting for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.370819  477629 pod_ready.go:94] pod "kube-proxy-8d8hl" is "Ready"
	I1101 10:38:48.370843  477629 pod_ready.go:86] duration metric: took 399.848762ms for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.571714  477629 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.970379  477629 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:48.970405  477629 pod_ready.go:86] duration metric: took 398.666981ms for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.970419  477629 pod_ready.go:40] duration metric: took 1.6037879s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:38:49.073922  477629 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:49.077321  477629 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-245904" cluster and "default" namespace by default
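The polling loop above (system_pods / pod_ready) simply watches the kube-system namespace until coredns and storage-provisioner leave Pending, then re-checks each control-plane pod for the Ready condition. A rough manual equivalent against the same cluster would be the following sketch; the context name is taken from the "Done!" line above, and the 240s timeout is an assumption, not a value used by the harness:

	kubectl --context default-k8s-diff-port-245904 -n kube-system get pods
	kubectl --context default-k8s-diff-port-245904 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s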
	I1101 10:38:48.081663  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:38:48.081803  484563 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:38:48.081886  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.113837  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.128947  484563 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:48.128971  484563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:38:48.129049  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.147928  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.164730  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.366485  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:48.378639  484563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:48.454208  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:48.536988  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:38:48.537025  484563 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:38:48.616413  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:38:48.616441  484563 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:38:48.648980  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:38:48.649016  484563 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:38:48.675345  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:38:48.675371  484563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:38:48.701062  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:38:48.701098  484563 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:38:48.726659  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:38:48.726686  484563 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:38:48.748690  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:38:48.748725  484563 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:38:48.783214  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:38:48.783240  484563 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:38:48.801973  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:38:48.802011  484563 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:38:48.831533  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:38:53.985493  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.61897437s)
	I1101 10:38:53.985554  484563 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.606891973s)
	I1101 10:38:53.985590  484563 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:53.985648  484563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:53.985750  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.531517471s)
	I1101 10:38:53.986054  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.154487304s)
	I1101 10:38:53.989599  484563 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-761749 addons enable metrics-server
	
	I1101 10:38:54.014468  484563 api_server.go:72] duration metric: took 6.014925238s to wait for apiserver process to appear ...
	I1101 10:38:54.014490  484563 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:54.014509  484563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:54.035042  484563 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:38:54.035077  484563 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:38:54.036343  484563 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:38:54.039445  484563 addons.go:515] duration metric: took 6.039496495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:38:54.514762  484563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:54.523462  484563 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:38:54.524613  484563 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:54.524639  484563 api_server.go:131] duration metric: took 510.141735ms to wait for apiserver health ...
	I1101 10:38:54.524649  484563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:54.528359  484563 system_pods.go:59] 8 kube-system pods found
	I1101 10:38:54.528400  484563 system_pods.go:61] "coredns-66bc5c9577-dkmh7" [4ba29de7-db66-4fb3-a494-f65c332a18fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:54.528410  484563 system_pods.go:61] "etcd-newest-cni-761749" [01442f80-7894-4906-bcf2-310262858f81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:38:54.528417  484563 system_pods.go:61] "kindnet-kj78v" [9e32b217-03e3-4606-a267-3a45809b6648] Running
	I1101 10:38:54.528425  484563 system_pods.go:61] "kube-apiserver-newest-cni-761749" [11f59f30-302f-4408-8088-f1ad8a9151d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:38:54.528432  484563 system_pods.go:61] "kube-controller-manager-newest-cni-761749" [45778566-a6e7-4161-b5e3-ac477859613d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:38:54.528437  484563 system_pods.go:61] "kube-proxy-fzkf5" [865ae218-f581-4914-b55c-fdf4d5134c58] Running
	I1101 10:38:54.528445  484563 system_pods.go:61] "kube-scheduler-newest-cni-761749" [cc737524-4ed5-438e-bc67-e23969166ef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:38:54.528450  484563 system_pods.go:61] "storage-provisioner" [33de256b-6331-467e-96be-298d220b8aa8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:54.528456  484563 system_pods.go:74] duration metric: took 3.798642ms to wait for pod list to return data ...
	I1101 10:38:54.528470  484563 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:54.531366  484563 default_sa.go:45] found service account: "default"
	I1101 10:38:54.531396  484563 default_sa.go:55] duration metric: took 2.919799ms for default service account to be created ...
	I1101 10:38:54.531409  484563 kubeadm.go:587] duration metric: took 6.531873597s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:54.531426  484563 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:54.534077  484563 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:54.534106  484563 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:54.534119  484563 node_conditions.go:105] duration metric: took 2.688763ms to run NodePressure ...
	I1101 10:38:54.534132  484563 start.go:242] waiting for startup goroutines ...
	I1101 10:38:54.534139  484563 start.go:247] waiting for cluster config update ...
	I1101 10:38:54.534154  484563 start.go:256] writing updated cluster config ...
	I1101 10:38:54.534454  484563 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:54.627651  484563 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:54.631116  484563 out.go:179] * Done! kubectl is now configured to use "newest-cni-761749" cluster and "default" namespace by default
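The healthz probe logged above first returned 500 with only [-]poststarthook/rbac/bootstrap-roles failing, then 200 about half a second later, which is the usual pattern right after an apiserver restart while the bootstrap RBAC roles are still being reconciled. The same per-check breakdown can be fetched by hand with kubectl's raw client (illustrative; assumes the newest-cni-761749 context configured by the "Done!" line above):

	kubectl --context newest-cni-761749 get --raw='/healthz?verbose'

The verbose form prints the same [+]/[-] list of post-start hooks that appears in the log.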
	
	
	==> CRI-O <==
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.488316116Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.494699453Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e87d7fc0-938b-4451-b0bd-2101293e92e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.498309087Z" level=info msg="Running pod sandbox: kube-system/kindnet-kj78v/POD" id=1605b298-334b-4073-b234-af9219f8a87c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.498359533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.500983819Z" level=info msg="Ran pod sandbox 52f7e0cc739402ec244c68208a6fded4c4be4b82d2a2d12de00337b56dc76829 with infra container: kube-system/kube-proxy-fzkf5/POD" id=e87d7fc0-938b-4451-b0bd-2101293e92e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.506280245Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1605b298-334b-4073-b234-af9219f8a87c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.513177652Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=97341aba-3796-4621-b906-3914f8f4a6e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.515486658Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=166e19e1-a081-4749-87bc-0bd23a0682b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.517480156Z" level=info msg="Creating container: kube-system/kube-proxy-fzkf5/kube-proxy" id=64d16fd0-de5d-4fc7-a40c-dddc4b1a2bb5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.517580375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.529868781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.543982239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.55202475Z" level=info msg="Ran pod sandbox 83c5fcc4b2dfc43bb52c0d627385db6dc2ed2c563bc66541e3d29a1ced3597ac with infra container: kube-system/kindnet-kj78v/POD" id=1605b298-334b-4073-b234-af9219f8a87c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.559327211Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=52c57e82-7613-4a53-adad-a56943c78757 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.562954001Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=71ef3feb-f9ce-4213-b2dc-3d448faea730 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.566090075Z" level=info msg="Creating container: kube-system/kindnet-kj78v/kindnet-cni" id=0d460fcf-c9b4-4957-8767-6a6933171575 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.566187324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.588812676Z" level=info msg="Created container a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4: kube-system/kube-proxy-fzkf5/kube-proxy" id=64d16fd0-de5d-4fc7-a40c-dddc4b1a2bb5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.592260756Z" level=info msg="Starting container: a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4" id=c7d0ac1a-c83a-4939-9e59-52c0411d4c0f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.598140946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.598847109Z" level=info msg="Started container" PID=1056 containerID=a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4 description=kube-system/kube-proxy-fzkf5/kube-proxy id=c7d0ac1a-c83a-4939-9e59-52c0411d4c0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=52f7e0cc739402ec244c68208a6fded4c4be4b82d2a2d12de00337b56dc76829
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.60309019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.645119328Z" level=info msg="Created container 7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8: kube-system/kindnet-kj78v/kindnet-cni" id=0d460fcf-c9b4-4957-8767-6a6933171575 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.647395021Z" level=info msg="Starting container: 7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8" id=58f78873-703c-49f5-9648-2218154ffbe3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.649613556Z" level=info msg="Started container" PID=1067 containerID=7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8 description=kube-system/kindnet-kj78v/kindnet-cni id=58f78873-703c-49f5-9648-2218154ffbe3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=83c5fcc4b2dfc43bb52c0d627385db6dc2ed2c563bc66541e3d29a1ced3597ac
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7d39017947fe9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   83c5fcc4b2dfc       kindnet-kj78v                               kube-system
	a69ca386ee8c1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   52f7e0cc73940       kube-proxy-fzkf5                            kube-system
	de93a1f63a9d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   43a348a682329       kube-scheduler-newest-cni-761749            kube-system
	414d6f893c68b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   41841303ecb65       kube-controller-manager-newest-cni-761749   kube-system
	b9f553ff34209       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   9b2f7b0415eaa       etcd-newest-cni-761749                      kube-system
	8e311efa9f61f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   189069b9f7c53       kube-apiserver-newest-cni-761749            kube-system
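This container listing is CRI-O's view of the node after the restart (attempt 1 for every pod, all Running). A hedged way to reproduce it directly on the node, assuming the profile name from this log and that crictl is available inside the minikube node image (it normally is), would be:

	minikube -p newest-cni-761749 ssh -- sudo crictl ps -a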
	
	
	==> describe nodes <==
	Name:               newest-cni-761749
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-761749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=newest-cni-761749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_38_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:38:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-761749
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:38:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-761749
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d919014c-b008-45f7-b1e1-0de245f57299
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-761749                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-kj78v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-761749             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-761749    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-fzkf5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-761749             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 24s   kube-proxy       
	  Normal   Starting                 4s    kube-proxy       
	  Normal   Starting                 30s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 30s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  30s   kubelet          Node newest-cni-761749 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    30s   kubelet          Node newest-cni-761749 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     30s   kubelet          Node newest-cni-761749 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26s   node-controller  Node newest-cni-761749 event: Registered Node newest-cni-761749 in Controller
	  Normal   RegisteredNode           2s    node-controller  Node newest-cni-761749 event: Registered Node newest-cni-761749 in Controller
	
	
	==> dmesg <==
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf] <==
	{"level":"warn","ts":"2025-11-01T10:38:50.632536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.657191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.686204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.703479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.714797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.733776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.750590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.769446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.787204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.827694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.856232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.878364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.908071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.929505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.950175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.977296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.001933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.020533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.031840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.055697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.075179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.101897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.158371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.194284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.254376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:59 up  2:21,  0 user,  load average: 3.68, 4.02, 3.28
	Linux newest-cni-761749 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8] <==
	I1101 10:38:53.864936       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:38:53.870673       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:38:53.870890       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:38:53.870947       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:38:53.870988       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:38:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:38:54.044087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:38:54.046148       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:38:54.046264       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:38:54.047234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150] <==
	I1101 10:38:52.632947       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:38:52.641994       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:38:52.642016       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:38:52.642023       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:38:52.642031       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:38:52.642184       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:38:52.642208       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:38:52.642255       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:38:52.642287       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:38:52.642292       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:38:52.645399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:38:52.647922       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:38:52.693827       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:38:53.234684       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:38:53.330086       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:38:53.413815       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:38:53.534454       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:38:53.626718       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:38:53.665919       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:38:53.915525       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.244.247"}
	I1101 10:38:53.942541       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.139.122"}
	I1101 10:38:56.244830       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:38:56.423345       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:38:56.474354       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:38:56.632458       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635] <==
	I1101 10:38:56.068603       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:38:56.072823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:38:56.073218       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:38:56.075001       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:38:56.075105       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:38:56.079448       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:38:56.082731       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:38:56.089601       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:38:56.093119       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:38:56.097444       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:38:56.102796       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:38:56.107228       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:38:56.108845       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:38:56.119241       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:38:56.123602       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:38:56.131959       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:38:56.151150       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:38:56.153835       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:38:56.153902       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:38:56.159809       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:38:56.167209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:56.167239       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:38:56.167248       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:38:56.173833       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:56.176130       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4] <==
	I1101 10:38:54.154763       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:38:54.231351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:38:54.243899       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:38:54.243996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:38:54.244085       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:38:54.278686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:38:54.278811       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:38:54.298120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:38:54.298636       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:38:54.298663       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:54.330999       1 config.go:200] "Starting service config controller"
	I1101 10:38:54.331077       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:38:54.331125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:38:54.331153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:38:54.331187       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:38:54.331214       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:38:54.332833       1 config.go:309] "Starting node config controller"
	I1101 10:38:54.334300       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:38:54.334356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:38:54.432054       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:38:54.432161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:38:54.432175       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3] <==
	I1101 10:38:54.063908       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:38:56.142148       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:38:56.142256       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:56.149804       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:38:56.149942       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:38:56.149991       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:38:56.150039       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:38:56.179156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:56.179262       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:56.179521       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:38:56.179581       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:38:56.250362       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:38:56.280801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:56.281017       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.447447     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.477254     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.666395     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.666518     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.666547     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.667520     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.681787     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-761749\" already exists" pod="kube-system/kube-controller-manager-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.681820     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.699846     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-761749\" already exists" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.712488     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-761749\" already exists" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.712524     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.728450     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-761749\" already exists" pod="kube-system/etcd-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.728486     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.741038     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-761749\" already exists" pod="kube-system/kube-apiserver-newest-cni-761749"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.164597     727 apiserver.go:52] "Watching apiserver"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.277706     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294185     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-xtables-lock\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294267     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/865ae218-f581-4914-b55c-fdf4d5134c58-lib-modules\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294342     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-lib-modules\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294364     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/865ae218-f581-4914-b55c-fdf4d5134c58-xtables-lock\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294385     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-cni-cfg\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.350845     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:38:56 newest-cni-761749 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:38:56 newest-cni-761749 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:38:56 newest-cni-761749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
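The tail of the kubelet log above shows systemd stopping kubelet.service while the node container keeps running, which is consistent with the pause attempt stopping the kubelet; the checks below then probe the cluster from the outside. A quick way to confirm the kubelet state on the node itself (a hypothetical follow-up, assuming the node container is still up) would be:

	out/minikube-linux-arm64 -p newest-cni-761749 ssh -- sudo systemctl is-active kubelet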
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-761749 -n newest-cni-761749
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-761749 -n newest-cni-761749: exit status 2 (587.949757ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
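A non-zero exit from minikube status typically means at least one component is not in its expected state, and the single-field template above does not show which one. A hedged follow-up with the same binary and profile (assuming the --output flag is available in this build) would print the full status:

	out/minikube-linux-arm64 status -p newest-cni-761749 --output=json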
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-761749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx: exit status 1 (134.672868ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dkmh7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bpz2x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xknbx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx: exit status 1
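The describe step above most likely returns NotFound because the listed pods live in the kube-system and kubernetes-dashboard namespaces, while the describe call passes no namespace and therefore looks in default. A single-pass, namespace-aware variant of the same query (a sketch, assuming the same kubectl context) would be:

	kubectl --context newest-cni-761749 get pods -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read -r ns name; do kubectl --context newest-cni-761749 -n "$ns" describe pod "$name"; done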
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-761749
helpers_test.go:243: (dbg) docker inspect newest-cni-761749:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e",
	        "Created": "2025-11-01T10:38:01.36860666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:38:39.67860239Z",
	            "FinishedAt": "2025-11-01T10:38:38.709089471Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/hosts",
	        "LogPath": "/var/lib/docker/containers/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e/b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e-json.log",
	        "Name": "/newest-cni-761749",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-761749:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-761749",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0ea1613e7b923949b25e09b765d65247fec98e6d7b2befa3aac43a3b7bfd11e",
	                "LowerDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efe3a9fc6c5faaa365f8372f247b368587a4099e386abc11712bab10bf8462f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-761749",
	                "Source": "/var/lib/docker/volumes/newest-cni-761749/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-761749",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-761749",
	                "name.minikube.sigs.k8s.io": "newest-cni-761749",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ed1504feddf2b20dc6c25467247fa776c737289e581d25e60278d67c81a2ea1",
	            "SandboxKey": "/var/run/docker/netns/7ed1504feddf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-761749": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:3e:72:ce:41:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5c14f0066ec7c912b0be843273782822de5f27a5f2c689449899d5fe3a845a2",
	                    "EndpointID": "c3cbc1529306f73cf22c158107d6c00a5d1f610fc4f490dbaadd70d5db269086",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-761749",
	                        "b0ea1613e7b9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
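Only a handful of fields in the inspect dump above matter for this post-mortem (State, the published ports, and the network attachment); they can be pulled directly with docker's Go-template formatter, the same mechanism the start trace below uses with --format={{.State.Status}}. A minimal sketch:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}} ip={{(index .NetworkSettings.Networks "newest-cni-761749").IPAddress}}' newest-cni-761749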
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749: exit status 2 (436.338367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-761749 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-761749 logs -n 25: (1.435450927s)
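The -n 25 above only tails the most recent entries of each log source; when a longer window is needed, the same logs can be written to a file instead (assuming the --file flag is available in this minikube build):

	out/minikube-linux-arm64 -p newest-cni-761749 logs --file=/tmp/newest-cni-761749-logs.txt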
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p newest-cni-761749 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-761749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ newest-cni-761749 image list --format=json                                                                                                                                                                                                    │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-761749 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:38:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:38:39.399270  484563 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:38:39.399466  484563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:39.399495  484563 out.go:374] Setting ErrFile to fd 2...
	I1101 10:38:39.399516  484563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:39.399817  484563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:38:39.400255  484563 out.go:368] Setting JSON to false
	I1101 10:38:39.401281  484563 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8469,"bootTime":1761985051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:38:39.401381  484563 start.go:143] virtualization:  
	I1101 10:38:39.406465  484563 out.go:179] * [newest-cni-761749] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:38:39.409727  484563 notify.go:221] Checking for updates...
	I1101 10:38:39.410613  484563 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:38:39.413803  484563 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:38:39.416769  484563 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:39.419641  484563 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:38:39.422574  484563 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:38:39.425407  484563 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:38:39.429341  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:39.429959  484563 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:38:39.463477  484563 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:38:39.463592  484563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:39.521776  484563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:38:39.511848177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:39.521889  484563 docker.go:319] overlay module found
	I1101 10:38:39.525055  484563 out.go:179] * Using the docker driver based on existing profile
	I1101 10:38:39.527855  484563 start.go:309] selected driver: docker
	I1101 10:38:39.527878  484563 start.go:930] validating driver "docker" against &{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:39.527989  484563 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:38:39.528718  484563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:39.590420  484563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:38:39.581000966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:39.590774  484563 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:39.590815  484563 cni.go:84] Creating CNI manager for ""
	I1101 10:38:39.590877  484563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:39.590958  484563 start.go:353] cluster config:
	{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:39.595980  484563 out.go:179] * Starting "newest-cni-761749" primary control-plane node in "newest-cni-761749" cluster
	I1101 10:38:39.598792  484563 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:38:39.601801  484563 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:38:39.604630  484563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:39.604684  484563 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:38:39.604697  484563 cache.go:59] Caching tarball of preloaded images
	I1101 10:38:39.604726  484563 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:38:39.604800  484563 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:38:39.604811  484563 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:38:39.604967  484563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:39.623857  484563 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:38:39.623882  484563 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:38:39.623895  484563 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:38:39.623917  484563 start.go:360] acquireMachinesLock for newest-cni-761749: {Name:mkbbc8f02c65f1e3740f70e3b6e44f341f2e91e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:38:39.623975  484563 start.go:364] duration metric: took 35.488µs to acquireMachinesLock for "newest-cni-761749"
	I1101 10:38:39.623998  484563 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:38:39.624007  484563 fix.go:54] fixHost starting: 
	I1101 10:38:39.624350  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:39.642020  484563 fix.go:112] recreateIfNeeded on newest-cni-761749: state=Stopped err=<nil>
	W1101 10:38:39.642047  484563 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:38:35.732258  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:38.231755  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:40.232491  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:39.645338  484563 out.go:252] * Restarting existing docker container for "newest-cni-761749" ...
	I1101 10:38:39.645430  484563 cli_runner.go:164] Run: docker start newest-cni-761749
	I1101 10:38:39.921415  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:39.945905  484563 kic.go:430] container "newest-cni-761749" state is running.
	I1101 10:38:39.946279  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:39.969820  484563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:39.970049  484563 machine.go:94] provisionDockerMachine start ...
	I1101 10:38:39.970109  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:39.991857  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:39.992553  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:39.992582  484563 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:38:39.994771  484563 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39040->127.0.0.1:33450: read: connection reset by peer
	I1101 10:38:43.149549  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:43.149575  484563 ubuntu.go:182] provisioning hostname "newest-cni-761749"
	I1101 10:38:43.149664  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:43.172331  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:43.172643  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:43.172660  484563 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-761749 && echo "newest-cni-761749" | sudo tee /etc/hostname
	I1101 10:38:43.336353  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:43.336479  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:43.356550  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:43.356862  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:43.356878  484563 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-761749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-761749/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-761749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:38:43.510375  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:38:43.510404  484563 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:38:43.510428  484563 ubuntu.go:190] setting up certificates
	I1101 10:38:43.510446  484563 provision.go:84] configureAuth start
	I1101 10:38:43.510522  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:43.528959  484563 provision.go:143] copyHostCerts
	I1101 10:38:43.529048  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:38:43.529068  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:38:43.529166  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:38:43.529286  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:38:43.529299  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:38:43.529333  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:38:43.529426  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:38:43.529438  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:38:43.529479  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:38:43.529552  484563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.newest-cni-761749 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-761749]
	I1101 10:38:44.113512  484563 provision.go:177] copyRemoteCerts
	I1101 10:38:44.113610  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:38:44.113675  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.131710  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.238514  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:38:44.256546  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:38:44.275881  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:38:44.303245  484563 provision.go:87] duration metric: took 792.773225ms to configureAuth
	I1101 10:38:44.303272  484563 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:38:44.303483  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:44.303590  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.322212  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:44.322526  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:44.322546  484563 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1101 10:38:42.732191  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:45.234071  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:44.622724  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:38:44.622790  484563 machine.go:97] duration metric: took 4.652732465s to provisionDockerMachine
	I1101 10:38:44.622806  484563 start.go:293] postStartSetup for "newest-cni-761749" (driver="docker")
	I1101 10:38:44.622817  484563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:38:44.622913  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:38:44.622958  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.642026  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.750354  484563 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:38:44.754548  484563 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:38:44.754576  484563 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:38:44.754587  484563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:38:44.754647  484563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:38:44.754735  484563 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:38:44.754839  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:38:44.763939  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:44.784636  484563 start.go:296] duration metric: took 161.813814ms for postStartSetup
	I1101 10:38:44.784735  484563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:38:44.784780  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.802216  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.908521  484563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:38:44.913394  484563 fix.go:56] duration metric: took 5.289380019s for fixHost
	I1101 10:38:44.913418  484563 start.go:83] releasing machines lock for "newest-cni-761749", held for 5.289429826s
	I1101 10:38:44.913498  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:44.930512  484563 ssh_runner.go:195] Run: cat /version.json
	I1101 10:38:44.930572  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.930871  484563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:38:44.930937  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.950674  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.963879  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:45.252302  484563 ssh_runner.go:195] Run: systemctl --version
	I1101 10:38:45.262558  484563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:38:45.336279  484563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:38:45.342222  484563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:38:45.342299  484563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:38:45.352867  484563 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:38:45.352908  484563 start.go:496] detecting cgroup driver to use...
	I1101 10:38:45.352976  484563 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:38:45.353057  484563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:38:45.373242  484563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:38:45.394512  484563 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:38:45.394588  484563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:38:45.417203  484563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:38:45.438615  484563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:38:45.590047  484563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:38:45.743848  484563 docker.go:234] disabling docker service ...
	I1101 10:38:45.744008  484563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:38:45.767987  484563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:38:45.782983  484563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:38:45.962814  484563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:38:46.126117  484563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:38:46.149830  484563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:38:46.179257  484563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:38:46.179336  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.196552  484563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:38:46.196638  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.206305  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.216430  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.226931  484563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:38:46.237000  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.246883  484563 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.256657  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
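The three edits above are idempotent: any existing ip_unprivileged_port_start entry is dropped, an empty default_sysctls list is created if missing, and the sysctl is then prepended to it. Assuming they applied cleanly, the resulting fragment of 02-crio.conf can be verified with:

  sudo grep -A2 '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
  # expected output, approximately:
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",
  #   ]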
	I1101 10:38:46.267521  484563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:38:46.275521  484563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:38:46.282901  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:46.407343  484563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:38:46.530548  484563 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:38:46.530664  484563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:38:46.534806  484563 start.go:564] Will wait 60s for crictl version
	I1101 10:38:46.534901  484563 ssh_runner.go:195] Run: which crictl
	I1101 10:38:46.538455  484563 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:38:46.563072  484563 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:38:46.563175  484563 ssh_runner.go:195] Run: crio --version
	I1101 10:38:46.591515  484563 ssh_runner.go:195] Run: crio --version
	I1101 10:38:46.624776  484563 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:38:46.627856  484563 cli_runner.go:164] Run: docker network inspect newest-cni-761749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:38:46.644268  484563 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:38:46.648330  484563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:46.666545  484563 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:38:46.669479  484563 kubeadm.go:884] updating cluster {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:38:46.669612  484563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:46.669722  484563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:46.704158  484563 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:46.704186  484563 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:38:46.704246  484563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:46.730433  484563 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:46.730458  484563 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:38:46.730467  484563 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:38:46.730570  484563 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-761749 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
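The rendered drop-in is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line is deliberate, since a systemd drop-in must clear the base unit's ExecStart before overriding it. To review what actually landed on the node (a hypothetical check, not part of this run):

  sudo systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in
  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf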
	I1101 10:38:46.730659  484563 ssh_runner.go:195] Run: crio config
	I1101 10:38:46.822263  484563 cni.go:84] Creating CNI manager for ""
	I1101 10:38:46.822297  484563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:46.822310  484563 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:38:46.822335  484563 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-761749 NodeName:newest-cni-761749 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:38:46.822479  484563 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-761749"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
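This rendered config is scp'd to /var/tmp/minikube/kubeadm.yaml.new just below, and the restart path decides whether the control plane needs reconfiguring by diffing it against the config left by the previous run (see the diff -u call later in this log). The same comparison can be reproduced manually on the node:

  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
  # an empty diff means the running cluster does not require reconfiguration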
	I1101 10:38:46.822563  484563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:38:46.835713  484563 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:38:46.835822  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:38:46.846259  484563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:38:46.861049  484563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:38:46.874167  484563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 10:38:46.887234  484563 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:38:46.891045  484563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:46.901353  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:47.023234  484563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:47.044268  484563 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749 for IP: 192.168.85.2
	I1101 10:38:47.044300  484563 certs.go:195] generating shared ca certs ...
	I1101 10:38:47.044338  484563 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.044559  484563 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:38:47.044631  484563 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:38:47.044645  484563 certs.go:257] generating profile certs ...
	I1101 10:38:47.044758  484563 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.key
	I1101 10:38:47.044870  484563 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d
	I1101 10:38:47.044947  484563 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key
	I1101 10:38:47.045096  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:38:47.045158  484563 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:38:47.045175  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:38:47.045226  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:38:47.045270  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:38:47.045329  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:38:47.045397  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:47.046200  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:38:47.064415  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:38:47.081836  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:38:47.099624  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:38:47.117450  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:38:47.136819  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:38:47.160266  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:38:47.190759  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:38:47.212958  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:38:47.240289  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:38:47.265449  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:38:47.285751  484563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:38:47.308157  484563 ssh_runner.go:195] Run: openssl version
	I1101 10:38:47.314558  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:38:47.324099  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.328030  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.328148  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.391207  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:38:47.401746  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:38:47.410355  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.414312  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.414374  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.456845  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:38:47.465162  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:38:47.473840  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.478063  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.478184  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.519316  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:38:47.527779  484563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:38:47.531750  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:38:47.577023  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:38:47.620010  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:38:47.663101  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:38:47.713848  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:38:47.765080  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
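Each of these -checkend 86400 calls exits non-zero if the certificate would expire within the next 86400 seconds (24 hours), which is presumably how minikube decides the existing control-plane certificates can be reused. To print the actual expiry dates instead of a pass/fail result, something like:

  for c in apiserver apiserver-kubelet-client front-proxy-client; do
    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/$c.crt
  done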
	I1101 10:38:47.819911  484563 kubeadm.go:401] StartCluster: {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:47.820050  484563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:38:47.820146  484563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:38:47.922806  484563 cri.go:89] found id: "de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3"
	I1101 10:38:47.922871  484563 cri.go:89] found id: "414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635"
	I1101 10:38:47.922902  484563 cri.go:89] found id: "b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf"
	I1101 10:38:47.922985  484563 cri.go:89] found id: "8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150"
	I1101 10:38:47.923009  484563 cri.go:89] found id: ""
	I1101 10:38:47.923078  484563 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:38:47.947198  484563 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:47Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:38:47.947337  484563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:38:47.959160  484563 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:38:47.959231  484563 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:38:47.959303  484563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:38:47.975928  484563 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:38:47.976553  484563 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-761749" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:47.976847  484563 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-761749" cluster setting kubeconfig missing "newest-cni-761749" context setting]
	I1101 10:38:47.977318  484563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.978789  484563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:38:47.998019  484563 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:38:47.998092  484563 kubeadm.go:602] duration metric: took 38.840815ms to restartPrimaryControlPlane
	I1101 10:38:47.998118  484563 kubeadm.go:403] duration metric: took 178.215958ms to StartCluster
	I1101 10:38:47.998147  484563 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.998232  484563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:47.999204  484563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.999476  484563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:38:47.999852  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:47.999926  484563 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:38:48.000005  484563 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-761749"
	I1101 10:38:48.000021  484563 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-761749"
	W1101 10:38:48.000028  484563 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:38:48.000051  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.000545  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.000954  484563 addons.go:70] Setting dashboard=true in profile "newest-cni-761749"
	I1101 10:38:48.000998  484563 addons.go:239] Setting addon dashboard=true in "newest-cni-761749"
	W1101 10:38:48.001030  484563 addons.go:248] addon dashboard should already be in state true
	I1101 10:38:48.001077  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.001215  484563 addons.go:70] Setting default-storageclass=true in profile "newest-cni-761749"
	I1101 10:38:48.001230  484563 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-761749"
	I1101 10:38:48.001509  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.002410  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.008079  484563 out.go:179] * Verifying Kubernetes components...
	I1101 10:38:48.013179  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:48.055991  484563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:38:48.058957  484563 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:48.058982  484563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:38:48.059053  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.070894  484563 addons.go:239] Setting addon default-storageclass=true in "newest-cni-761749"
	W1101 10:38:48.070919  484563 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:38:48.070946  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.071361  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.075882  484563 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:38:48.078821  484563 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:38:45.732277  477629 node_ready.go:49] node "default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:45.732306  477629 node_ready.go:38] duration metric: took 39.504103123s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:38:45.732320  477629 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:45.732374  477629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:45.762283  477629 api_server.go:72] duration metric: took 40.806706118s to wait for apiserver process to appear ...
	I1101 10:38:45.762306  477629 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:45.762336  477629 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:38:45.773094  477629 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:38:45.778354  477629 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:45.778380  477629 api_server.go:131] duration metric: took 16.066881ms to wait for apiserver health ...
	I1101 10:38:45.778389  477629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:45.788072  477629 system_pods.go:59] 8 kube-system pods found
	I1101 10:38:45.788132  477629 system_pods.go:61] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:45.788140  477629 system_pods.go:61] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:45.788146  477629 system_pods.go:61] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:45.788150  477629 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:45.788155  477629 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:45.788173  477629 system_pods.go:61] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:45.788177  477629 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:45.788184  477629 system_pods.go:61] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:45.788191  477629 system_pods.go:74] duration metric: took 9.797424ms to wait for pod list to return data ...
	I1101 10:38:45.788206  477629 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:45.799144  477629 default_sa.go:45] found service account: "default"
	I1101 10:38:45.799169  477629 default_sa.go:55] duration metric: took 10.95587ms for default service account to be created ...
	I1101 10:38:45.799185  477629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:38:45.807183  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:45.807214  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:45.807221  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:45.807229  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:45.807234  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:45.807239  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:45.807243  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:45.807247  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:45.807252  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:45.807274  477629 retry.go:31] will retry after 310.68281ms: missing components: kube-dns
	I1101 10:38:46.136392  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.136430  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.136437  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.136446  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.136450  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.136454  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.136458  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.136463  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.136469  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.136487  477629 retry.go:31] will retry after 306.636472ms: missing components: kube-dns
	I1101 10:38:46.447474  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.447510  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.447517  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.447524  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.447529  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.447533  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.447537  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.447542  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.447548  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.447561  477629 retry.go:31] will retry after 319.925041ms: missing components: kube-dns
	I1101 10:38:46.772305  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.772339  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.772347  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.772353  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.772357  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.772361  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.772365  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.772369  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.772375  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.772389  477629 retry.go:31] will retry after 564.006275ms: missing components: kube-dns
	I1101 10:38:47.341207  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:47.341234  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running
	I1101 10:38:47.341242  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:47.341248  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:47.341253  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:47.341258  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:47.341262  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:47.341266  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:47.341270  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:38:47.341277  477629 system_pods.go:126] duration metric: took 1.54208615s to wait for k8s-apps to be running ...
	I1101 10:38:47.341284  477629 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:38:47.341341  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:38:47.357836  477629 system_svc.go:56] duration metric: took 16.542098ms WaitForService to wait for kubelet
	I1101 10:38:47.357861  477629 kubeadm.go:587] duration metric: took 42.402290232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:38:47.357880  477629 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:47.361122  477629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:47.361194  477629 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:47.361224  477629 node_conditions.go:105] duration metric: took 3.336874ms to run NodePressure ...
	I1101 10:38:47.361249  477629 start.go:242] waiting for startup goroutines ...
	I1101 10:38:47.361281  477629 start.go:247] waiting for cluster config update ...
	I1101 10:38:47.361311  477629 start.go:256] writing updated cluster config ...
	I1101 10:38:47.361638  477629 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:47.366602  477629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:38:47.370670  477629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.376318  477629 pod_ready.go:94] pod "coredns-66bc5c9577-h2552" is "Ready"
	I1101 10:38:47.376383  477629 pod_ready.go:86] duration metric: took 5.693233ms for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.379098  477629 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.384645  477629 pod_ready.go:94] pod "etcd-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.384719  477629 pod_ready.go:86] duration metric: took 5.55184ms for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.387276  477629 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.392529  477629 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.392596  477629 pod_ready.go:86] duration metric: took 5.257927ms for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.398622  477629 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.772080  477629 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.772159  477629 pod_ready.go:86] duration metric: took 373.468907ms for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.970970  477629 pod_ready.go:83] waiting for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.370819  477629 pod_ready.go:94] pod "kube-proxy-8d8hl" is "Ready"
	I1101 10:38:48.370843  477629 pod_ready.go:86] duration metric: took 399.848762ms for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.571714  477629 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.970379  477629 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:48.970405  477629 pod_ready.go:86] duration metric: took 398.666981ms for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.970419  477629 pod_ready.go:40] duration metric: took 1.6037879s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:38:49.073922  477629 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:49.077321  477629 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-245904" cluster and "default" namespace by default
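At this point the kubeconfig at /home/jenkins/minikube-integration/21833-285274/kubeconfig carries a context named after the profile, and the kubectl 1.33.2 vs. cluster 1.34.1 skew noted above is within the supported one-minor-version window. A quick smoke test (hypothetical, not part of the recorded run):

  kubectl --context default-k8s-diff-port-245904 get nodes -o wide
  kubectl --context default-k8s-diff-port-245904 -n kube-system get pods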
	I1101 10:38:48.081663  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:38:48.081803  484563 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:38:48.081886  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.113837  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.128947  484563 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:48.128971  484563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:38:48.129049  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.147928  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.164730  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.366485  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:48.378639  484563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:48.454208  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:48.536988  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:38:48.537025  484563 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:38:48.616413  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:38:48.616441  484563 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:38:48.648980  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:38:48.649016  484563 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:38:48.675345  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:38:48.675371  484563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:38:48.701062  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:38:48.701098  484563 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:38:48.726659  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:38:48.726686  484563 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:38:48.748690  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:38:48.748725  484563 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:38:48.783214  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:38:48.783240  484563 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:38:48.801973  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:38:48.802011  484563 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:38:48.831533  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:38:53.985493  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.61897437s)
	I1101 10:38:53.985554  484563 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.606891973s)
	I1101 10:38:53.985590  484563 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:53.985648  484563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:53.985750  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.531517471s)
	I1101 10:38:53.986054  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.154487304s)
	I1101 10:38:53.989599  484563 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-761749 addons enable metrics-server
	
	I1101 10:38:54.014468  484563 api_server.go:72] duration metric: took 6.014925238s to wait for apiserver process to appear ...
	I1101 10:38:54.014490  484563 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:54.014509  484563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:54.035042  484563 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:38:54.035077  484563 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:38:54.036343  484563 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:38:54.039445  484563 addons.go:515] duration metric: took 6.039496495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:38:54.514762  484563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:54.523462  484563 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:38:54.524613  484563 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:54.524639  484563 api_server.go:131] duration metric: took 510.141735ms to wait for apiserver health ...
	I1101 10:38:54.524649  484563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:54.528359  484563 system_pods.go:59] 8 kube-system pods found
	I1101 10:38:54.528400  484563 system_pods.go:61] "coredns-66bc5c9577-dkmh7" [4ba29de7-db66-4fb3-a494-f65c332a18fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:54.528410  484563 system_pods.go:61] "etcd-newest-cni-761749" [01442f80-7894-4906-bcf2-310262858f81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:38:54.528417  484563 system_pods.go:61] "kindnet-kj78v" [9e32b217-03e3-4606-a267-3a45809b6648] Running
	I1101 10:38:54.528425  484563 system_pods.go:61] "kube-apiserver-newest-cni-761749" [11f59f30-302f-4408-8088-f1ad8a9151d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:38:54.528432  484563 system_pods.go:61] "kube-controller-manager-newest-cni-761749" [45778566-a6e7-4161-b5e3-ac477859613d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:38:54.528437  484563 system_pods.go:61] "kube-proxy-fzkf5" [865ae218-f581-4914-b55c-fdf4d5134c58] Running
	I1101 10:38:54.528445  484563 system_pods.go:61] "kube-scheduler-newest-cni-761749" [cc737524-4ed5-438e-bc67-e23969166ef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:38:54.528450  484563 system_pods.go:61] "storage-provisioner" [33de256b-6331-467e-96be-298d220b8aa8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:54.528456  484563 system_pods.go:74] duration metric: took 3.798642ms to wait for pod list to return data ...
	I1101 10:38:54.528470  484563 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:54.531366  484563 default_sa.go:45] found service account: "default"
	I1101 10:38:54.531396  484563 default_sa.go:55] duration metric: took 2.919799ms for default service account to be created ...
	I1101 10:38:54.531409  484563 kubeadm.go:587] duration metric: took 6.531873597s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:54.531426  484563 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:54.534077  484563 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:54.534106  484563 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:54.534119  484563 node_conditions.go:105] duration metric: took 2.688763ms to run NodePressure ...
	I1101 10:38:54.534132  484563 start.go:242] waiting for startup goroutines ...
	I1101 10:38:54.534139  484563 start.go:247] waiting for cluster config update ...
	I1101 10:38:54.534154  484563 start.go:256] writing updated cluster config ...
	I1101 10:38:54.534454  484563 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:54.627651  484563 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:54.631116  484563 out.go:179] * Done! kubectl is now configured to use "newest-cni-761749" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.488316116Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.494699453Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e87d7fc0-938b-4451-b0bd-2101293e92e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.498309087Z" level=info msg="Running pod sandbox: kube-system/kindnet-kj78v/POD" id=1605b298-334b-4073-b234-af9219f8a87c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.498359533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.500983819Z" level=info msg="Ran pod sandbox 52f7e0cc739402ec244c68208a6fded4c4be4b82d2a2d12de00337b56dc76829 with infra container: kube-system/kube-proxy-fzkf5/POD" id=e87d7fc0-938b-4451-b0bd-2101293e92e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.506280245Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1605b298-334b-4073-b234-af9219f8a87c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.513177652Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=97341aba-3796-4621-b906-3914f8f4a6e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.515486658Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=166e19e1-a081-4749-87bc-0bd23a0682b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.517480156Z" level=info msg="Creating container: kube-system/kube-proxy-fzkf5/kube-proxy" id=64d16fd0-de5d-4fc7-a40c-dddc4b1a2bb5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.517580375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.529868781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.543982239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.55202475Z" level=info msg="Ran pod sandbox 83c5fcc4b2dfc43bb52c0d627385db6dc2ed2c563bc66541e3d29a1ced3597ac with infra container: kube-system/kindnet-kj78v/POD" id=1605b298-334b-4073-b234-af9219f8a87c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.559327211Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=52c57e82-7613-4a53-adad-a56943c78757 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.562954001Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=71ef3feb-f9ce-4213-b2dc-3d448faea730 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.566090075Z" level=info msg="Creating container: kube-system/kindnet-kj78v/kindnet-cni" id=0d460fcf-c9b4-4957-8767-6a6933171575 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.566187324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.588812676Z" level=info msg="Created container a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4: kube-system/kube-proxy-fzkf5/kube-proxy" id=64d16fd0-de5d-4fc7-a40c-dddc4b1a2bb5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.592260756Z" level=info msg="Starting container: a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4" id=c7d0ac1a-c83a-4939-9e59-52c0411d4c0f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.598140946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.598847109Z" level=info msg="Started container" PID=1056 containerID=a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4 description=kube-system/kube-proxy-fzkf5/kube-proxy id=c7d0ac1a-c83a-4939-9e59-52c0411d4c0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=52f7e0cc739402ec244c68208a6fded4c4be4b82d2a2d12de00337b56dc76829
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.60309019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.645119328Z" level=info msg="Created container 7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8: kube-system/kindnet-kj78v/kindnet-cni" id=0d460fcf-c9b4-4957-8767-6a6933171575 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.647395021Z" level=info msg="Starting container: 7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8" id=58f78873-703c-49f5-9648-2218154ffbe3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:53 newest-cni-761749 crio[611]: time="2025-11-01T10:38:53.649613556Z" level=info msg="Started container" PID=1067 containerID=7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8 description=kube-system/kindnet-kj78v/kindnet-cni id=58f78873-703c-49f5-9648-2218154ffbe3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=83c5fcc4b2dfc43bb52c0d627385db6dc2ed2c563bc66541e3d29a1ced3597ac
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7d39017947fe9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   83c5fcc4b2dfc       kindnet-kj78v                               kube-system
	a69ca386ee8c1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   52f7e0cc73940       kube-proxy-fzkf5                            kube-system
	de93a1f63a9d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   43a348a682329       kube-scheduler-newest-cni-761749            kube-system
	414d6f893c68b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   41841303ecb65       kube-controller-manager-newest-cni-761749   kube-system
	b9f553ff34209       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   9b2f7b0415eaa       etcd-newest-cni-761749                      kube-system
	8e311efa9f61f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   189069b9f7c53       kube-apiserver-newest-cni-761749            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-761749
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-761749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=newest-cni-761749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_38_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:38:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-761749
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:38:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:38:52 +0000   Sat, 01 Nov 2025 10:38:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-761749
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d919014c-b008-45f7-b1e1-0de245f57299
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-761749                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-kj78v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-761749             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-761749    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-fzkf5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-761749             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 27s   kube-proxy       
	  Normal   Starting                 7s    kube-proxy       
	  Normal   Starting                 33s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s   kubelet          Node newest-cni-761749 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s   kubelet          Node newest-cni-761749 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s   kubelet          Node newest-cni-761749 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s   node-controller  Node newest-cni-761749 event: Registered Node newest-cni-761749 in Controller
	  Normal   RegisteredNode           5s    node-controller  Node newest-cni-761749 event: Registered Node newest-cni-761749 in Controller
	
	
	==> dmesg <==
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf] <==
	{"level":"warn","ts":"2025-11-01T10:38:50.632536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.657191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.686204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.703479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.714797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.733776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.750590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.769446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.787204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.827694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.856232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.878364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.908071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.929505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.950175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:50.977296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.001933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.020533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.031840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.055697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.075179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.101897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.158371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.194284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:38:51.254376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:39:01 up  2:21,  0 user,  load average: 3.68, 4.02, 3.28
	Linux newest-cni-761749 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d39017947fe9056923cff80d0d40b232338785f6905a5ddebcbfb674ea0d2b8] <==
	I1101 10:38:53.864936       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:38:53.870673       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:38:53.870890       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:38:53.870947       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:38:53.870988       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:38:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:38:54.044087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:38:54.046148       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:38:54.046264       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:38:54.047234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150] <==
	I1101 10:38:52.632947       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:38:52.641994       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:38:52.642016       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:38:52.642023       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:38:52.642031       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:38:52.642184       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:38:52.642208       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:38:52.642255       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:38:52.642287       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:38:52.642292       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:38:52.645399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:38:52.647922       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:38:52.693827       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:38:53.234684       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:38:53.330086       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:38:53.413815       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:38:53.534454       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:38:53.626718       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:38:53.665919       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:38:53.915525       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.244.247"}
	I1101 10:38:53.942541       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.139.122"}
	I1101 10:38:56.244830       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:38:56.423345       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:38:56.474354       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:38:56.632458       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635] <==
	I1101 10:38:56.068603       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:38:56.072823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:38:56.073218       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:38:56.075001       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:38:56.075105       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:38:56.079448       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:38:56.082731       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:38:56.089601       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:38:56.093119       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:38:56.097444       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:38:56.102796       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:38:56.107228       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:38:56.108845       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:38:56.119241       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:38:56.123602       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:38:56.131959       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:38:56.151150       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:38:56.153835       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:38:56.153902       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:38:56.159809       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:38:56.167209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:56.167239       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:38:56.167248       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:38:56.173833       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:56.176130       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [a69ca386ee8c1b6c401a82ff1ca20f1473ea2e1e1c543026317e5d0a70d285a4] <==
	I1101 10:38:54.154763       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:38:54.231351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:38:54.243899       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:38:54.243996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:38:54.244085       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:38:54.278686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:38:54.278811       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:38:54.298120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:38:54.298636       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:38:54.298663       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:54.330999       1 config.go:200] "Starting service config controller"
	I1101 10:38:54.331077       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:38:54.331125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:38:54.331153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:38:54.331187       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:38:54.331214       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:38:54.332833       1 config.go:309] "Starting node config controller"
	I1101 10:38:54.334300       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:38:54.334356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:38:54.432054       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:38:54.432161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:38:54.432175       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3] <==
	I1101 10:38:54.063908       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:38:56.142148       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:38:56.142256       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:56.149804       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:38:56.149942       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:38:56.149991       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:38:56.150039       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:38:56.179156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:56.179262       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:56.179521       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:38:56.179581       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:38:56.250362       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:38:56.280801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:38:56.281017       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.447447     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.477254     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.666395     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.666518     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.666547     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.667520     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.681787     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-761749\" already exists" pod="kube-system/kube-controller-manager-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.681820     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.699846     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-761749\" already exists" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.712488     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-761749\" already exists" pod="kube-system/kube-scheduler-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.712524     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.728450     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-761749\" already exists" pod="kube-system/etcd-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: I1101 10:38:52.728486     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-761749"
	Nov 01 10:38:52 newest-cni-761749 kubelet[727]: E1101 10:38:52.741038     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-761749\" already exists" pod="kube-system/kube-apiserver-newest-cni-761749"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.164597     727 apiserver.go:52] "Watching apiserver"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.277706     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294185     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-xtables-lock\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294267     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/865ae218-f581-4914-b55c-fdf4d5134c58-lib-modules\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294342     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-lib-modules\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294364     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/865ae218-f581-4914-b55c-fdf4d5134c58-xtables-lock\") pod \"kube-proxy-fzkf5\" (UID: \"865ae218-f581-4914-b55c-fdf4d5134c58\") " pod="kube-system/kube-proxy-fzkf5"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.294385     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e32b217-03e3-4606-a267-3a45809b6648-cni-cfg\") pod \"kindnet-kj78v\" (UID: \"9e32b217-03e3-4606-a267-3a45809b6648\") " pod="kube-system/kindnet-kj78v"
	Nov 01 10:38:53 newest-cni-761749 kubelet[727]: I1101 10:38:53.350845     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:38:56 newest-cni-761749 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:38:56 newest-cni-761749 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:38:56 newest-cni-761749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-761749 -n newest-cni-761749
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-761749 -n newest-cni-761749: exit status 2 (466.344576ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-761749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx: exit status 1 (87.01541ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dkmh7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bpz2x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xknbx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-761749 describe pod coredns-66bc5c9577-dkmh7 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bpz2x kubernetes-dashboard-855c9754f9-xknbx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.53s)
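For local reproduction of this single subtest, the FAIL path above maps directly onto go test's -run filter. A minimal sketch, assuming a checkout of the minikube repo and omitting the harness-specific flags (driver selection, minikube binary path) that this CI job passes:

	go test ./test/integration -run 'TestStartStop/group/newest-cni/serial/Pause' -timeout 30m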

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (303.482481ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-245904 describe deploy/metrics-server -n kube-system: exit status 1 (99.263113ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-245904 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
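The MK_ADDON_ENABLE_PAUSED message above spells out the mechanism: before enabling the addon, minikube checks whether the cluster is paused by listing runc containers on the node, and it is that probe which fails (sudo runc list -f json cannot open /run/runc). A rough manual check against the node container (container name taken from the docker inspect output below; wrapping the probe in docker exec is my assumption, not part of the harness):

	docker exec default-k8s-diff-port-245904 sudo runc list -f json
	docker exec default-k8s-diff-port-245904 ls -ld /run/runc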
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-245904
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-245904:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e",
	        "Created": "2025-11-01T10:37:31.035014069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:37:31.114097729Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/hosts",
	        "LogPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e-json.log",
	        "Name": "/default-k8s-diff-port-245904",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-245904:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-245904",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e",
	                "LowerDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-245904",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-245904/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-245904",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-245904",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-245904",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "75f4e66a9f5c1739d0c17124f1586d3fb85ce6e0c3a0d340186f824ee7098504",
	            "SandboxKey": "/var/run/docker/netns/75f4e66a9f5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-245904": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:0a:d4:e1:d2:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ca453ec076d50791763a6c741bc9e74267d64bf587acdd7076e49fdbf14831b1",
	                    "EndpointID": "23c0972cf670bda7cf63caea491fc69f8ebd16e9bc2f2eb518ed98d8facbdff1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-245904",
	                        "a7be6b4a2a88"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
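The inspect output above is the same data the harness later reads with Go templates; a small sketch of pulling out just the fields the tests care about (the host-mapped SSH port and the cluster IP), using the same --format expressions that appear in the log below:

  # Host port mapped to the container's SSH port (22/tcp)
  docker container inspect -f \
    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
    default-k8s-diff-port-245904

  # IPv4 address on the per-profile docker network
  docker container inspect -f \
    '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
    default-k8s-diff-port-245904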
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-245904 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-245904 logs -n 25: (1.934699691s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p no-preload-170467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-618070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ stop    │ -p embed-certs-618070 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:37 UTC │
	│ image   │ no-preload-170467 image list --format=json                                                                                                                                                                                                    │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p newest-cni-761749 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-761749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ newest-cni-761749 image list --format=json                                                                                                                                                                                                    │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-761749 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:38:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:38:39.399270  484563 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:38:39.399466  484563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:39.399495  484563 out.go:374] Setting ErrFile to fd 2...
	I1101 10:38:39.399516  484563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:38:39.399817  484563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:38:39.400255  484563 out.go:368] Setting JSON to false
	I1101 10:38:39.401281  484563 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8469,"bootTime":1761985051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:38:39.401381  484563 start.go:143] virtualization:  
	I1101 10:38:39.406465  484563 out.go:179] * [newest-cni-761749] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:38:39.409727  484563 notify.go:221] Checking for updates...
	I1101 10:38:39.410613  484563 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:38:39.413803  484563 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:38:39.416769  484563 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:39.419641  484563 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:38:39.422574  484563 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:38:39.425407  484563 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:38:39.429341  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:39.429959  484563 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:38:39.463477  484563 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:38:39.463592  484563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:39.521776  484563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:38:39.511848177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:39.521889  484563 docker.go:319] overlay module found
	I1101 10:38:39.525055  484563 out.go:179] * Using the docker driver based on existing profile
	I1101 10:38:39.527855  484563 start.go:309] selected driver: docker
	I1101 10:38:39.527878  484563 start.go:930] validating driver "docker" against &{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:39.527989  484563 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:38:39.528718  484563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:38:39.590420  484563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 10:38:39.581000966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:38:39.590774  484563 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:39.590815  484563 cni.go:84] Creating CNI manager for ""
	I1101 10:38:39.590877  484563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:39.590958  484563 start.go:353] cluster config:
	{Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:39.595980  484563 out.go:179] * Starting "newest-cni-761749" primary control-plane node in "newest-cni-761749" cluster
	I1101 10:38:39.598792  484563 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:38:39.601801  484563 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:38:39.604630  484563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:39.604684  484563 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:38:39.604697  484563 cache.go:59] Caching tarball of preloaded images
	I1101 10:38:39.604726  484563 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:38:39.604800  484563 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:38:39.604811  484563 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:38:39.604967  484563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:39.623857  484563 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:38:39.623882  484563 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:38:39.623895  484563 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:38:39.623917  484563 start.go:360] acquireMachinesLock for newest-cni-761749: {Name:mkbbc8f02c65f1e3740f70e3b6e44f341f2e91e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:38:39.623975  484563 start.go:364] duration metric: took 35.488µs to acquireMachinesLock for "newest-cni-761749"
	I1101 10:38:39.623998  484563 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:38:39.624007  484563 fix.go:54] fixHost starting: 
	I1101 10:38:39.624350  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:39.642020  484563 fix.go:112] recreateIfNeeded on newest-cni-761749: state=Stopped err=<nil>
	W1101 10:38:39.642047  484563 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:38:35.732258  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:38.231755  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:40.232491  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:39.645338  484563 out.go:252] * Restarting existing docker container for "newest-cni-761749" ...
	I1101 10:38:39.645430  484563 cli_runner.go:164] Run: docker start newest-cni-761749
	I1101 10:38:39.921415  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:39.945905  484563 kic.go:430] container "newest-cni-761749" state is running.
	I1101 10:38:39.946279  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:39.969820  484563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/config.json ...
	I1101 10:38:39.970049  484563 machine.go:94] provisionDockerMachine start ...
	I1101 10:38:39.970109  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:39.991857  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:39.992553  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:39.992582  484563 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:38:39.994771  484563 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39040->127.0.0.1:33450: read: connection reset by peer
	I1101 10:38:43.149549  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:43.149575  484563 ubuntu.go:182] provisioning hostname "newest-cni-761749"
	I1101 10:38:43.149664  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:43.172331  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:43.172643  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:43.172660  484563 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-761749 && echo "newest-cni-761749" | sudo tee /etc/hostname
	I1101 10:38:43.336353  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-761749
	
	I1101 10:38:43.336479  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:43.356550  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:43.356862  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:43.356878  484563 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-761749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-761749/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-761749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:38:43.510375  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:38:43.510404  484563 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:38:43.510428  484563 ubuntu.go:190] setting up certificates
	I1101 10:38:43.510446  484563 provision.go:84] configureAuth start
	I1101 10:38:43.510522  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:43.528959  484563 provision.go:143] copyHostCerts
	I1101 10:38:43.529048  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:38:43.529068  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:38:43.529166  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:38:43.529286  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:38:43.529299  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:38:43.529333  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:38:43.529426  484563 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:38:43.529438  484563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:38:43.529479  484563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:38:43.529552  484563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.newest-cni-761749 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-761749]
	I1101 10:38:44.113512  484563 provision.go:177] copyRemoteCerts
	I1101 10:38:44.113610  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:38:44.113675  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.131710  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.238514  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:38:44.256546  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:38:44.275881  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:38:44.303245  484563 provision.go:87] duration metric: took 792.773225ms to configureAuth
	I1101 10:38:44.303272  484563 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:38:44.303483  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:44.303590  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.322212  484563 main.go:143] libmachine: Using SSH client type: native
	I1101 10:38:44.322526  484563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1101 10:38:44.322546  484563 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1101 10:38:42.732191  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	W1101 10:38:45.234071  477629 node_ready.go:57] node "default-k8s-diff-port-245904" has "Ready":"False" status (will retry)
	I1101 10:38:44.622724  484563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:38:44.622790  484563 machine.go:97] duration metric: took 4.652732465s to provisionDockerMachine
	I1101 10:38:44.622806  484563 start.go:293] postStartSetup for "newest-cni-761749" (driver="docker")
	I1101 10:38:44.622817  484563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:38:44.622913  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:38:44.622958  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.642026  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.750354  484563 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:38:44.754548  484563 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:38:44.754576  484563 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:38:44.754587  484563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:38:44.754647  484563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:38:44.754735  484563 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:38:44.754839  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:38:44.763939  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:44.784636  484563 start.go:296] duration metric: took 161.813814ms for postStartSetup
	I1101 10:38:44.784735  484563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:38:44.784780  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.802216  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.908521  484563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:38:44.913394  484563 fix.go:56] duration metric: took 5.289380019s for fixHost
	I1101 10:38:44.913418  484563 start.go:83] releasing machines lock for "newest-cni-761749", held for 5.289429826s
	I1101 10:38:44.913498  484563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-761749
	I1101 10:38:44.930512  484563 ssh_runner.go:195] Run: cat /version.json
	I1101 10:38:44.930572  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.930871  484563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:38:44.930937  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:44.950674  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:44.963879  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:45.252302  484563 ssh_runner.go:195] Run: systemctl --version
	I1101 10:38:45.262558  484563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:38:45.336279  484563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:38:45.342222  484563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:38:45.342299  484563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:38:45.352867  484563 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:38:45.352908  484563 start.go:496] detecting cgroup driver to use...
	I1101 10:38:45.352976  484563 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:38:45.353057  484563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:38:45.373242  484563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:38:45.394512  484563 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:38:45.394588  484563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:38:45.417203  484563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:38:45.438615  484563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:38:45.590047  484563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:38:45.743848  484563 docker.go:234] disabling docker service ...
	I1101 10:38:45.744008  484563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:38:45.767987  484563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:38:45.782983  484563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:38:45.962814  484563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:38:46.126117  484563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:38:46.149830  484563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:38:46.179257  484563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:38:46.179336  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.196552  484563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:38:46.196638  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.206305  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.216430  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.226931  484563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:38:46.237000  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.246883  484563 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.256657  484563 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:38:46.267521  484563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:38:46.275521  484563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:38:46.282901  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:46.407343  484563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:38:46.530548  484563 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:38:46.530664  484563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:38:46.534806  484563 start.go:564] Will wait 60s for crictl version
	I1101 10:38:46.534901  484563 ssh_runner.go:195] Run: which crictl
	I1101 10:38:46.538455  484563 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:38:46.563072  484563 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:38:46.563175  484563 ssh_runner.go:195] Run: crio --version
	I1101 10:38:46.591515  484563 ssh_runner.go:195] Run: crio --version
	I1101 10:38:46.624776  484563 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:38:46.627856  484563 cli_runner.go:164] Run: docker network inspect newest-cni-761749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:38:46.644268  484563 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:38:46.648330  484563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:46.666545  484563 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:38:46.669479  484563 kubeadm.go:884] updating cluster {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:38:46.669612  484563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:38:46.669722  484563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:46.704158  484563 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:46.704186  484563 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:38:46.704246  484563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:38:46.730433  484563 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:38:46.730458  484563 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:38:46.730467  484563 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:38:46.730570  484563 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-761749 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:38:46.730659  484563 ssh_runner.go:195] Run: crio config
	I1101 10:38:46.822263  484563 cni.go:84] Creating CNI manager for ""
	I1101 10:38:46.822297  484563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:38:46.822310  484563 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:38:46.822335  484563 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-761749 NodeName:newest-cni-761749 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:38:46.822479  484563 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-761749"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:38:46.822563  484563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:38:46.835713  484563 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:38:46.835822  484563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:38:46.846259  484563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:38:46.861049  484563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:38:46.874167  484563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
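	The 2212-byte file copied here is the multi-document kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file). As a minimal sketch, assuming the bundled kubeadm binary sits next to kubelet at the same versioned path and ships the "config validate" subcommand (present in recent releases), the generated file could be checked offline before the restart:
	
		# sketch: validate the generated multi-document config without touching the cluster
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	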
	I1101 10:38:46.887234  484563 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:38:46.891045  484563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:38:46.901353  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:47.023234  484563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:47.044268  484563 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749 for IP: 192.168.85.2
	I1101 10:38:47.044300  484563 certs.go:195] generating shared ca certs ...
	I1101 10:38:47.044338  484563 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.044559  484563 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:38:47.044631  484563 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:38:47.044645  484563 certs.go:257] generating profile certs ...
	I1101 10:38:47.044758  484563 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/client.key
	I1101 10:38:47.044870  484563 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key.6f5a246d
	I1101 10:38:47.044947  484563 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key
	I1101 10:38:47.045096  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:38:47.045158  484563 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:38:47.045175  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:38:47.045226  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:38:47.045270  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:38:47.045329  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:38:47.045397  484563 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:38:47.046200  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:38:47.064415  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:38:47.081836  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:38:47.099624  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:38:47.117450  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:38:47.136819  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:38:47.160266  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:38:47.190759  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/newest-cni-761749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:38:47.212958  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:38:47.240289  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:38:47.265449  484563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:38:47.285751  484563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:38:47.308157  484563 ssh_runner.go:195] Run: openssl version
	I1101 10:38:47.314558  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:38:47.324099  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.328030  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.328148  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:38:47.391207  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:38:47.401746  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:38:47.410355  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.414312  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.414374  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:38:47.456845  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:38:47.465162  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:38:47.473840  484563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.478063  484563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.478184  484563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:38:47.519316  484563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
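	The three openssl/ln pairs above follow the standard OpenSSL CA-bundle layout: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 here). A minimal sketch of the same pattern, using the paths from the log:
	
		# sketch: derive the <hash>.0 name and (re)create the symlink for one certificate
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	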
	I1101 10:38:47.527779  484563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:38:47.531750  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:38:47.577023  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:38:47.620010  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:38:47.663101  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:38:47.713848  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:38:47.765080  484563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:38:47.819911  484563 kubeadm.go:401] StartCluster: {Name:newest-cni-761749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-761749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:38:47.820050  484563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:38:47.820146  484563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:38:47.922806  484563 cri.go:89] found id: "de93a1f63a9d3c4fe900f5766c8143f4f0cfc5c264276ad60ac51ab1a84988d3"
	I1101 10:38:47.922871  484563 cri.go:89] found id: "414d6f893c68b755fc729b16f2cd8b4e936d00bdbbb7ae6fafe5a9d7fda62635"
	I1101 10:38:47.922902  484563 cri.go:89] found id: "b9f553ff342098fd441b42aa1e52310fae9a2b1952ea819220331db38af305bf"
	I1101 10:38:47.922985  484563 cri.go:89] found id: "8e311efa9f61ff9f631155480b75fb70507dd1cd49a022969169b03774e7d150"
	I1101 10:38:47.923009  484563 cri.go:89] found id: ""
	I1101 10:38:47.923078  484563 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:38:47.947198  484563 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:38:47Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:38:47.947337  484563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:38:47.959160  484563 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:38:47.959231  484563 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:38:47.959303  484563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:38:47.975928  484563 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:38:47.976553  484563 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-761749" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:47.976847  484563 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-761749" cluster setting kubeconfig missing "newest-cni-761749" context setting]
	I1101 10:38:47.977318  484563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.978789  484563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:38:47.998019  484563 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:38:47.998092  484563 kubeadm.go:602] duration metric: took 38.840815ms to restartPrimaryControlPlane
	I1101 10:38:47.998118  484563 kubeadm.go:403] duration metric: took 178.215958ms to StartCluster
	I1101 10:38:47.998147  484563 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.998232  484563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:38:47.999204  484563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:38:47.999476  484563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:38:47.999852  484563 config.go:182] Loaded profile config "newest-cni-761749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:38:47.999926  484563 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:38:48.000005  484563 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-761749"
	I1101 10:38:48.000021  484563 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-761749"
	W1101 10:38:48.000028  484563 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:38:48.000051  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.000545  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.000954  484563 addons.go:70] Setting dashboard=true in profile "newest-cni-761749"
	I1101 10:38:48.000998  484563 addons.go:239] Setting addon dashboard=true in "newest-cni-761749"
	W1101 10:38:48.001030  484563 addons.go:248] addon dashboard should already be in state true
	I1101 10:38:48.001077  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.001215  484563 addons.go:70] Setting default-storageclass=true in profile "newest-cni-761749"
	I1101 10:38:48.001230  484563 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-761749"
	I1101 10:38:48.001509  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.002410  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.008079  484563 out.go:179] * Verifying Kubernetes components...
	I1101 10:38:48.013179  484563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:38:48.055991  484563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:38:48.058957  484563 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:48.058982  484563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:38:48.059053  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.070894  484563 addons.go:239] Setting addon default-storageclass=true in "newest-cni-761749"
	W1101 10:38:48.070919  484563 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:38:48.070946  484563 host.go:66] Checking if "newest-cni-761749" exists ...
	I1101 10:38:48.071361  484563 cli_runner.go:164] Run: docker container inspect newest-cni-761749 --format={{.State.Status}}
	I1101 10:38:48.075882  484563 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:38:48.078821  484563 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:38:45.732277  477629 node_ready.go:49] node "default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:45.732306  477629 node_ready.go:38] duration metric: took 39.504103123s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:38:45.732320  477629 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:45.732374  477629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:45.762283  477629 api_server.go:72] duration metric: took 40.806706118s to wait for apiserver process to appear ...
	I1101 10:38:45.762306  477629 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:45.762336  477629 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:38:45.773094  477629 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:38:45.778354  477629 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:45.778380  477629 api_server.go:131] duration metric: took 16.066881ms to wait for apiserver health ...
	I1101 10:38:45.778389  477629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:45.788072  477629 system_pods.go:59] 8 kube-system pods found
	I1101 10:38:45.788132  477629 system_pods.go:61] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:45.788140  477629 system_pods.go:61] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:45.788146  477629 system_pods.go:61] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:45.788150  477629 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:45.788155  477629 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:45.788173  477629 system_pods.go:61] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:45.788177  477629 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:45.788184  477629 system_pods.go:61] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:45.788191  477629 system_pods.go:74] duration metric: took 9.797424ms to wait for pod list to return data ...
	I1101 10:38:45.788206  477629 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:45.799144  477629 default_sa.go:45] found service account: "default"
	I1101 10:38:45.799169  477629 default_sa.go:55] duration metric: took 10.95587ms for default service account to be created ...
	I1101 10:38:45.799185  477629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:38:45.807183  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:45.807214  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:45.807221  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:45.807229  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:45.807234  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:45.807239  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:45.807243  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:45.807247  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:45.807252  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:45.807274  477629 retry.go:31] will retry after 310.68281ms: missing components: kube-dns
	I1101 10:38:46.136392  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.136430  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.136437  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.136446  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.136450  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.136454  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.136458  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.136463  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.136469  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.136487  477629 retry.go:31] will retry after 306.636472ms: missing components: kube-dns
	I1101 10:38:46.447474  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.447510  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.447517  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.447524  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.447529  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.447533  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.447537  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.447542  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.447548  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.447561  477629 retry.go:31] will retry after 319.925041ms: missing components: kube-dns
	I1101 10:38:46.772305  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:46.772339  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:38:46.772347  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:46.772353  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:46.772357  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:46.772361  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:46.772365  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:46.772369  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:46.772375  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:38:46.772389  477629 retry.go:31] will retry after 564.006275ms: missing components: kube-dns
	I1101 10:38:47.341207  477629 system_pods.go:86] 8 kube-system pods found
	I1101 10:38:47.341234  477629 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running
	I1101 10:38:47.341242  477629 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running
	I1101 10:38:47.341248  477629 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:38:47.341253  477629 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running
	I1101 10:38:47.341258  477629 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:38:47.341262  477629 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:38:47.341266  477629 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running
	I1101 10:38:47.341270  477629 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:38:47.341277  477629 system_pods.go:126] duration metric: took 1.54208615s to wait for k8s-apps to be running ...
	I1101 10:38:47.341284  477629 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:38:47.341341  477629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:38:47.357836  477629 system_svc.go:56] duration metric: took 16.542098ms WaitForService to wait for kubelet
	I1101 10:38:47.357861  477629 kubeadm.go:587] duration metric: took 42.402290232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:38:47.357880  477629 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:47.361122  477629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:47.361194  477629 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:47.361224  477629 node_conditions.go:105] duration metric: took 3.336874ms to run NodePressure ...
	I1101 10:38:47.361249  477629 start.go:242] waiting for startup goroutines ...
	I1101 10:38:47.361281  477629 start.go:247] waiting for cluster config update ...
	I1101 10:38:47.361311  477629 start.go:256] writing updated cluster config ...
	I1101 10:38:47.361638  477629 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:47.366602  477629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:38:47.370670  477629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.376318  477629 pod_ready.go:94] pod "coredns-66bc5c9577-h2552" is "Ready"
	I1101 10:38:47.376383  477629 pod_ready.go:86] duration metric: took 5.693233ms for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.379098  477629 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.384645  477629 pod_ready.go:94] pod "etcd-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.384719  477629 pod_ready.go:86] duration metric: took 5.55184ms for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.387276  477629 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.392529  477629 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.392596  477629 pod_ready.go:86] duration metric: took 5.257927ms for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.398622  477629 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.772080  477629 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:47.772159  477629 pod_ready.go:86] duration metric: took 373.468907ms for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:47.970970  477629 pod_ready.go:83] waiting for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.370819  477629 pod_ready.go:94] pod "kube-proxy-8d8hl" is "Ready"
	I1101 10:38:48.370843  477629 pod_ready.go:86] duration metric: took 399.848762ms for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.571714  477629 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.970379  477629 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-245904" is "Ready"
	I1101 10:38:48.970405  477629 pod_ready.go:86] duration metric: took 398.666981ms for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:38:48.970419  477629 pod_ready.go:40] duration metric: took 1.6037879s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:38:49.073922  477629 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:49.077321  477629 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-245904" cluster and "default" namespace by default
	I1101 10:38:48.081663  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:38:48.081803  484563 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:38:48.081886  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.113837  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.128947  484563 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:48.128971  484563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:38:48.129049  484563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-761749
	I1101 10:38:48.147928  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.164730  484563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/newest-cni-761749/id_rsa Username:docker}
	I1101 10:38:48.366485  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:38:48.378639  484563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:38:48.454208  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:38:48.536988  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:38:48.537025  484563 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:38:48.616413  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:38:48.616441  484563 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:38:48.648980  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:38:48.649016  484563 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:38:48.675345  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:38:48.675371  484563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:38:48.701062  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:38:48.701098  484563 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:38:48.726659  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:38:48.726686  484563 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:38:48.748690  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:38:48.748725  484563 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:38:48.783214  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:38:48.783240  484563 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:38:48.801973  484563 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:38:48.802011  484563 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:38:48.831533  484563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:38:53.985493  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.61897437s)
	I1101 10:38:53.985554  484563 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.606891973s)
	I1101 10:38:53.985590  484563 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:38:53.985648  484563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:38:53.985750  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.531517471s)
	I1101 10:38:53.986054  484563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.154487304s)
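	With the manifests applied, one way to confirm the dashboard actually comes up is to wait on its Deployment rollout. A minimal sketch, assuming the addon's usual kubernetes-dashboard namespace and deployment name (neither is shown in this log):
	
		# sketch: wait for the dashboard Deployment applied above to become available
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
		  -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s
	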
	I1101 10:38:53.989599  484563 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-761749 addons enable metrics-server
	
	I1101 10:38:54.014468  484563 api_server.go:72] duration metric: took 6.014925238s to wait for apiserver process to appear ...
	I1101 10:38:54.014490  484563 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:38:54.014509  484563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:54.035042  484563 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:38:54.035077  484563 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
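	Only one check is failing in the dump above ([-]poststarthook/rbac/bootstrap-roles); every other post-start hook reports ok, and the retry a few lines below returns 200. The same verbose breakdown can be reproduced by hand. A minimal sketch, assuming anonymous access to /healthz is still enabled (the kubeadm default):
	
		# sketch: query the verbose healthz endpoint the log is polling
		curl -sk "https://192.168.85.2:8443/healthz?verbose"
	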
	I1101 10:38:54.036343  484563 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:38:54.039445  484563 addons.go:515] duration metric: took 6.039496495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:38:54.514762  484563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:38:54.523462  484563 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:38:54.524613  484563 api_server.go:141] control plane version: v1.34.1
	I1101 10:38:54.524639  484563 api_server.go:131] duration metric: took 510.141735ms to wait for apiserver health ...
	I1101 10:38:54.524649  484563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:38:54.528359  484563 system_pods.go:59] 8 kube-system pods found
	I1101 10:38:54.528400  484563 system_pods.go:61] "coredns-66bc5c9577-dkmh7" [4ba29de7-db66-4fb3-a494-f65c332a18fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:54.528410  484563 system_pods.go:61] "etcd-newest-cni-761749" [01442f80-7894-4906-bcf2-310262858f81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:38:54.528417  484563 system_pods.go:61] "kindnet-kj78v" [9e32b217-03e3-4606-a267-3a45809b6648] Running
	I1101 10:38:54.528425  484563 system_pods.go:61] "kube-apiserver-newest-cni-761749" [11f59f30-302f-4408-8088-f1ad8a9151d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:38:54.528432  484563 system_pods.go:61] "kube-controller-manager-newest-cni-761749" [45778566-a6e7-4161-b5e3-ac477859613d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:38:54.528437  484563 system_pods.go:61] "kube-proxy-fzkf5" [865ae218-f581-4914-b55c-fdf4d5134c58] Running
	I1101 10:38:54.528445  484563 system_pods.go:61] "kube-scheduler-newest-cni-761749" [cc737524-4ed5-438e-bc67-e23969166ef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:38:54.528450  484563 system_pods.go:61] "storage-provisioner" [33de256b-6331-467e-96be-298d220b8aa8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:38:54.528456  484563 system_pods.go:74] duration metric: took 3.798642ms to wait for pod list to return data ...
	I1101 10:38:54.528470  484563 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:38:54.531366  484563 default_sa.go:45] found service account: "default"
	I1101 10:38:54.531396  484563 default_sa.go:55] duration metric: took 2.919799ms for default service account to be created ...
	I1101 10:38:54.531409  484563 kubeadm.go:587] duration metric: took 6.531873597s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:38:54.531426  484563 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:38:54.534077  484563 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:38:54.534106  484563 node_conditions.go:123] node cpu capacity is 2
	I1101 10:38:54.534119  484563 node_conditions.go:105] duration metric: took 2.688763ms to run NodePressure ...
	I1101 10:38:54.534132  484563 start.go:242] waiting for startup goroutines ...
	I1101 10:38:54.534139  484563 start.go:247] waiting for cluster config update ...
	I1101 10:38:54.534154  484563 start.go:256] writing updated cluster config ...
	I1101 10:38:54.534454  484563 ssh_runner.go:195] Run: rm -f paused
	I1101 10:38:54.627651  484563 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:38:54.631116  484563 out.go:179] * Done! kubectl is now configured to use "newest-cni-761749" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:38:45 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:45.921263882Z" level=info msg="Created container 176584836f9c71b6b51be31214cf6c11e6d336da0feddbb905623540af784d1d: kube-system/coredns-66bc5c9577-h2552/coredns" id=ecbb17c1-1561-456c-bbd6-52ae939db554 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:45 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:45.934575996Z" level=info msg="Starting container: 176584836f9c71b6b51be31214cf6c11e6d336da0feddbb905623540af784d1d" id=05894153-a52e-41b6-bfa1-b219035f69e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:45 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:45.943534168Z" level=info msg="Started container" PID=1719 containerID=176584836f9c71b6b51be31214cf6c11e6d336da0feddbb905623540af784d1d description=kube-system/coredns-66bc5c9577-h2552/coredns id=05894153-a52e-41b6-bfa1-b219035f69e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99e186cdf04d3549351e8f9961648086b30469f1e52945a64540926ff9f68e11
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.696381207Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0c408628-6009-45e6-89b2-0a0f601c0d58 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.696459477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.709363789Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d UID:449ee4be-9b51-4739-a427-f668f7aa9729 NetNS:/var/run/netns/b767c0c5-0ab2-4fbc-ae92-d28379891c5c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b140}] Aliases:map[]}"
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.709406842Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.725160765Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d UID:449ee4be-9b51-4739-a427-f668f7aa9729 NetNS:/var/run/netns/b767c0c5-0ab2-4fbc-ae92-d28379891c5c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b140}] Aliases:map[]}"
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.725312801Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.735319492Z" level=info msg="Ran pod sandbox fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d with infra container: default/busybox/POD" id=0c408628-6009-45e6-89b2-0a0f601c0d58 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.736561156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fca5a6fb-9777-4835-baaf-7ba06e32f45c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.736856349Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fca5a6fb-9777-4835-baaf-7ba06e32f45c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.737046564Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fca5a6fb-9777-4835-baaf-7ba06e32f45c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.740323991Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a87f8d5-801e-4cd9-907c-7ee8c68f20cd name=/runtime.v1.ImageService/PullImage
	Nov 01 10:38:49 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:49.748925897Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:38:51 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:51.996592879Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5a87f8d5-801e-4cd9-907c-7ee8c68f20cd name=/runtime.v1.ImageService/PullImage
	Nov 01 10:38:51 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:51.998054428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8cdad3b-0ab0-41ec-8a83-25ad7b08dfc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.008338638Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0f20f5b-076b-4b7e-9a18-f788e5a959a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.022820004Z" level=info msg="Creating container: default/busybox/busybox" id=ea573c53-d6d9-4021-83a8-7ae36038128f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.023138048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.032615811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.034030901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.064494633Z" level=info msg="Created container 897182a41035485dffc2a8a0b8542a2bfd5b3bf66b06267564c0d718889a356c: default/busybox/busybox" id=ea573c53-d6d9-4021-83a8-7ae36038128f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.069930106Z" level=info msg="Starting container: 897182a41035485dffc2a8a0b8542a2bfd5b3bf66b06267564c0d718889a356c" id=850ffbb0-9285-470b-b5ac-bb5278130dba name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:38:52 default-k8s-diff-port-245904 crio[835]: time="2025-11-01T10:38:52.074626766Z" level=info msg="Started container" PID=1777 containerID=897182a41035485dffc2a8a0b8542a2bfd5b3bf66b06267564c0d718889a356c description=default/busybox/busybox id=850ffbb0-9285-470b-b5ac-bb5278130dba name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	897182a410354       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   fa78ae2132591       busybox                                                default
	176584836f9c7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   99e186cdf04d3       coredns-66bc5c9577-h2552                               kube-system
	7f95d011966e8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   75c8d4f21f806       storage-provisioner                                    kube-system
	6201f94622c63       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   0a42151d67ee3       kindnet-5xtxk                                          kube-system
	8f60cc5b1eb17       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   019ba42f5362a       kube-proxy-8d8hl                                       kube-system
	0b7b9f7eb5d92       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   d7c45c8340207       kube-scheduler-default-k8s-diff-port-245904            kube-system
	c87813952556f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   b0048dd8a3399       kube-controller-manager-default-k8s-diff-port-245904   kube-system
	b803a9a362605       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   21f76a2ad8757       kube-apiserver-default-k8s-diff-port-245904            kube-system
	5f7555ca70155       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   860b51744128a       etcd-default-k8s-diff-port-245904                      kube-system
	
	
	==> coredns [176584836f9c71b6b51be31214cf6c11e6d336da0feddbb905623540af784d1d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59755 - 35141 "HINFO IN 8019528082535615039.2879856541045275460. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021853614s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-245904
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-245904
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=default-k8s-diff-port-245904
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_38_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-245904
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:38:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:38:50 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:38:50 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:38:50 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:38:50 +0000   Sat, 01 Nov 2025 10:38:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-245904
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                50f868bb-abe9-4a86-b184-01355addeabf
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-h2552                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-245904                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-5xtxk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-245904             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-245904    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-8d8hl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-245904             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  70s (x8 over 71s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 71s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 71s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-245904 event: Registered Node default-k8s-diff-port-245904 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-245904 status is now: NodeReady
	
	
	==> dmesg <==
	[ +28.184214] overlayfs: idmapped layers are currently not supported
	[  +3.680873] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5f7555ca70155eff89d434381acd173b3ed4614727fc57de86a016390539116d] <==
	{"level":"warn","ts":"2025-11-01T10:37:53.191839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.225663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.267220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.318861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.374044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.414614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.525493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.549069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.590751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.663806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.708249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.757977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.788777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.839275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:53.882126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.002213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.011236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.057908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.121414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.177777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.232560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.296726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.358071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:37:54.568314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:38:00.798576Z","caller":"traceutil/trace.go:172","msg":"trace[844771586] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"106.787379ms","start":"2025-11-01T10:38:00.691772Z","end":"2025-11-01T10:38:00.798469Z","steps":["trace[844771586] 'process raft request'  (duration: 56.84477ms)","trace[844771586] 'compare'  (duration: 49.30898ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:39:00 up  2:21,  0 user,  load average: 3.68, 4.02, 3.28
	Linux default-k8s-diff-port-245904 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6201f94622c6371573d1663e83b9c6b9356325d5d98a65eb73ff7faf56361af6] <==
	I1101 10:38:04.931520       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:38:05.021703       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:38:05.021841       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:38:05.021862       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:38:05.021878       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:38:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:38:05.219390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:38:05.219417       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:38:05.219426       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:38:05.219792       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:38:35.219650       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 10:38:35.219650       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:38:35.219887       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:38:35.220887       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 10:38:36.720331       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:38:36.720368       1 metrics.go:72] Registering metrics
	I1101 10:38:36.720431       1 controller.go:711] "Syncing nftables rules"
	I1101 10:38:45.225885       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:38:45.225959       1 main.go:301] handling current node
	I1101 10:38:55.219370       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:38:55.219503       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b803a9a362605988826764153bc71ed451314bb1e46e629e193ad1128ad72106] <==
	I1101 10:37:56.704670       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:37:56.704701       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:37:56.716922       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:37:56.738654       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:37:56.756585       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:37:56.795857       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:37:56.808647       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:37:56.899328       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:37:57.054908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:37:57.077018       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:37:57.077112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:37:58.069122       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:37:58.244102       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:37:58.397744       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:37:58.432906       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:37:58.435519       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:37:58.446643       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:37:58.720657       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:37:59.317285       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:37:59.377303       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:37:59.456882       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:38:04.365849       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:38:04.733933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:38:04.785666       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:38:04.799507       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c87813952556f6dd471dd3f8288c6b9fffdc118d4230f1bff593380ec912389d] <==
	I1101 10:38:03.860906       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:38:03.860891       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:38:03.861587       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-245904"
	I1101 10:38:03.862389       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:38:03.862531       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:38:03.862556       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:38:03.864098       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:38:03.864311       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:38:03.874666       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:38:03.876074       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:38:03.884308       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-245904" podCIDRs=["10.244.0.0/24"]
	I1101 10:38:03.886451       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:38:03.896622       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:38:03.904961       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:38:03.909052       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:03.909155       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:38:03.909187       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:38:03.909417       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:38:03.909755       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:38:03.913569       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:38:03.913923       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:38:03.915279       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:38:03.922768       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:38:03.962621       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:38:48.870277       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f60cc5b1eb174fccef4ebf428ae3430285606981657e5f289992201c196aa5e] <==
	I1101 10:38:04.931620       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:38:05.212469       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:38:05.321498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:38:05.321541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:38:05.321639       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:38:05.392950       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:38:05.393003       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:38:05.403667       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:38:05.404030       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:38:05.404046       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:38:05.409314       1 config.go:200] "Starting service config controller"
	I1101 10:38:05.409342       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:38:05.422012       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:38:05.422044       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:38:05.422077       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:38:05.422082       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:38:05.423204       1 config.go:309] "Starting node config controller"
	I1101 10:38:05.423214       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:38:05.423220       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:38:05.510804       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:38:05.523427       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:38:05.523470       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0b7b9f7eb5d9220f0c87e593ed393a0e439c9c267ec72ac2153ac022c2472408] <==
	I1101 10:37:56.825791       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:37:56.828330       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:37:56.832843       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:37:56.835054       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:37:56.832866       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:37:56.867338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:37:56.867847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:37:56.882044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:37:56.882306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:37:56.882368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:37:56.882478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:37:56.882520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:37:56.882556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:37:56.882595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:37:56.882641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:37:56.882680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:37:56.882713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:37:56.882747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:37:56.882777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:37:56.882811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:37:56.883265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:37:56.883327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:37:56.883365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:37:56.883398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1101 10:37:58.535787       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:38:03 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:03.943273    1293 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:38:03 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:03.944161    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.478933    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wsr\" (UniqueName: \"kubernetes.io/projected/309f6966-2ac7-41de-929d-dea12fe0b5a1-kube-api-access-q6wsr\") pod \"kube-proxy-8d8hl\" (UID: \"309f6966-2ac7-41de-929d-dea12fe0b5a1\") " pod="kube-system/kube-proxy-8d8hl"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.479473    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/759fb4c8-8029-4d6e-a86c-3cf89ef062bc-cni-cfg\") pod \"kindnet-5xtxk\" (UID: \"759fb4c8-8029-4d6e-a86c-3cf89ef062bc\") " pod="kube-system/kindnet-5xtxk"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.479640    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsz9\" (UniqueName: \"kubernetes.io/projected/759fb4c8-8029-4d6e-a86c-3cf89ef062bc-kube-api-access-4hsz9\") pod \"kindnet-5xtxk\" (UID: \"759fb4c8-8029-4d6e-a86c-3cf89ef062bc\") " pod="kube-system/kindnet-5xtxk"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.479758    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/759fb4c8-8029-4d6e-a86c-3cf89ef062bc-xtables-lock\") pod \"kindnet-5xtxk\" (UID: \"759fb4c8-8029-4d6e-a86c-3cf89ef062bc\") " pod="kube-system/kindnet-5xtxk"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.479838    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/309f6966-2ac7-41de-929d-dea12fe0b5a1-kube-proxy\") pod \"kube-proxy-8d8hl\" (UID: \"309f6966-2ac7-41de-929d-dea12fe0b5a1\") " pod="kube-system/kube-proxy-8d8hl"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.479911    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/309f6966-2ac7-41de-929d-dea12fe0b5a1-xtables-lock\") pod \"kube-proxy-8d8hl\" (UID: \"309f6966-2ac7-41de-929d-dea12fe0b5a1\") " pod="kube-system/kube-proxy-8d8hl"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.479979    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/309f6966-2ac7-41de-929d-dea12fe0b5a1-lib-modules\") pod \"kube-proxy-8d8hl\" (UID: \"309f6966-2ac7-41de-929d-dea12fe0b5a1\") " pod="kube-system/kube-proxy-8d8hl"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.480060    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/759fb4c8-8029-4d6e-a86c-3cf89ef062bc-lib-modules\") pod \"kindnet-5xtxk\" (UID: \"759fb4c8-8029-4d6e-a86c-3cf89ef062bc\") " pod="kube-system/kindnet-5xtxk"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:04.602051    1293 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 10:38:04 default-k8s-diff-port-245904 kubelet[1293]: W1101 10:38:04.745513    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/crio-0a42151d67ee304f639f833243f903d3192b3e67b2ea324012573b8f9b0d46ad WatchSource:0}: Error finding container 0a42151d67ee304f639f833243f903d3192b3e67b2ea324012573b8f9b0d46ad: Status 404 returned error can't find the container with id 0a42151d67ee304f639f833243f903d3192b3e67b2ea324012573b8f9b0d46ad
	Nov 01 10:38:05 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:05.768924    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5xtxk" podStartSLOduration=1.76890509 podStartE2EDuration="1.76890509s" podCreationTimestamp="2025-11-01 10:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:05.720143602 +0000 UTC m=+6.519717496" watchObservedRunningTime="2025-11-01 10:38:05.76890509 +0000 UTC m=+6.568478993"
	Nov 01 10:38:10 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:10.392574    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8d8hl" podStartSLOduration=6.392546951 podStartE2EDuration="6.392546951s" podCreationTimestamp="2025-11-01 10:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:05.804028232 +0000 UTC m=+6.603602135" watchObservedRunningTime="2025-11-01 10:38:10.392546951 +0000 UTC m=+11.192120854"
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:45.388777    1293 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:45.524101    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8285\" (UniqueName: \"kubernetes.io/projected/6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08-kube-api-access-q8285\") pod \"storage-provisioner\" (UID: \"6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08\") " pod="kube-system/storage-provisioner"
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:45.524326    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08-tmp\") pod \"storage-provisioner\" (UID: \"6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08\") " pod="kube-system/storage-provisioner"
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:45.524443    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1f6d1e6-b67e-4d63-af54-505fd8515afa-config-volume\") pod \"coredns-66bc5c9577-h2552\" (UID: \"f1f6d1e6-b67e-4d63-af54-505fd8515afa\") " pod="kube-system/coredns-66bc5c9577-h2552"
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:45.524536    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpht2\" (UniqueName: \"kubernetes.io/projected/f1f6d1e6-b67e-4d63-af54-505fd8515afa-kube-api-access-kpht2\") pod \"coredns-66bc5c9577-h2552\" (UID: \"f1f6d1e6-b67e-4d63-af54-505fd8515afa\") " pod="kube-system/coredns-66bc5c9577-h2552"
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: W1101 10:38:45.811983    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/crio-75c8d4f21f806f3447ece1886e398df399af5f931d7ecb2b516e6a5494c93c0a WatchSource:0}: Error finding container 75c8d4f21f806f3447ece1886e398df399af5f931d7ecb2b516e6a5494c93c0a: Status 404 returned error can't find the container with id 75c8d4f21f806f3447ece1886e398df399af5f931d7ecb2b516e6a5494c93c0a
	Nov 01 10:38:45 default-k8s-diff-port-245904 kubelet[1293]: W1101 10:38:45.857813    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/crio-99e186cdf04d3549351e8f9961648086b30469f1e52945a64540926ff9f68e11 WatchSource:0}: Error finding container 99e186cdf04d3549351e8f9961648086b30469f1e52945a64540926ff9f68e11: Status 404 returned error can't find the container with id 99e186cdf04d3549351e8f9961648086b30469f1e52945a64540926ff9f68e11
	Nov 01 10:38:46 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:46.797963    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.797922738 podStartE2EDuration="40.797922738s" podCreationTimestamp="2025-11-01 10:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:46.783248268 +0000 UTC m=+47.582822171" watchObservedRunningTime="2025-11-01 10:38:46.797922738 +0000 UTC m=+47.597496633"
	Nov 01 10:38:49 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:49.386455    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h2552" podStartSLOduration=45.38643704 podStartE2EDuration="45.38643704s" podCreationTimestamp="2025-11-01 10:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:38:46.800588543 +0000 UTC m=+47.600162438" watchObservedRunningTime="2025-11-01 10:38:49.38643704 +0000 UTC m=+50.186010943"
	Nov 01 10:38:49 default-k8s-diff-port-245904 kubelet[1293]: I1101 10:38:49.460901    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxt8w\" (UniqueName: \"kubernetes.io/projected/449ee4be-9b51-4739-a427-f668f7aa9729-kube-api-access-pxt8w\") pod \"busybox\" (UID: \"449ee4be-9b51-4739-a427-f668f7aa9729\") " pod="default/busybox"
	Nov 01 10:38:49 default-k8s-diff-port-245904 kubelet[1293]: W1101 10:38:49.730368    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/crio-fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d WatchSource:0}: Error finding container fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d: Status 404 returned error can't find the container with id fa78ae213259124337cf5ba292565f5955117a9df30b806a714d19428c66ba9d
	
	
	==> storage-provisioner [7f95d011966e8a42942d6be64686a2f336049800616ab500ae88eab8c395788c] <==
	I1101 10:38:45.919401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:38:45.934316       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:38:45.934360       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:38:45.942449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:45.952057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:38:45.952470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:38:45.954512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-245904_a9a8bfd8-3eda-4a2e-8348-19f108ef6552!
	I1101 10:38:45.954629       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e194f99-8f93-4855-b159-998a98b1e129", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-245904_a9a8bfd8-3eda-4a2e-8348-19f108ef6552 became leader
	W1101 10:38:45.972071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:45.980899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:38:46.055364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-245904_a9a8bfd8-3eda-4a2e-8348-19f108ef6552!
	W1101 10:38:47.983611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:47.989473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:49.992884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:49.998136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:52.006372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:52.020505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:54.024787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:54.034775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:56.039559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:56.044248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:58.048484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:38:58.058302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:39:00.073518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:39:00.090423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.46s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-245904 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-245904 --alsologtostderr -v=1: exit status 80 (1.896476617s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-245904 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:40:20.651823  493183 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:40:20.651991  493183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:40:20.652002  493183 out.go:374] Setting ErrFile to fd 2...
	I1101 10:40:20.652007  493183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:40:20.652255  493183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:40:20.652519  493183 out.go:368] Setting JSON to false
	I1101 10:40:20.652541  493183 mustload.go:66] Loading cluster: default-k8s-diff-port-245904
	I1101 10:40:20.652930  493183 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:40:20.653393  493183 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:40:20.673214  493183 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:40:20.673536  493183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:40:20.746253  493183 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:40:20.727744494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:40:20.746958  493183 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-245904 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:40:20.750958  493183 out.go:179] * Pausing node default-k8s-diff-port-245904 ... 
	I1101 10:40:20.754378  493183 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:40:20.754728  493183 ssh_runner.go:195] Run: systemctl --version
	I1101 10:40:20.754780  493183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:40:20.774340  493183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:40:20.880730  493183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:40:20.908352  493183 pause.go:52] kubelet running: true
	I1101 10:40:20.908425  493183 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:40:21.145025  493183 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:40:21.145117  493183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:40:21.218891  493183 cri.go:89] found id: "d46a0edeb94014e2b6de899870e120c1e9663026c65d0bae3809f3f4a5097fd4"
	I1101 10:40:21.218975  493183 cri.go:89] found id: "98c11ffd4d3f91309c84aba212eabefcb80ccd370b1c392fdbd639ef33c9cf14"
	I1101 10:40:21.218995  493183 cri.go:89] found id: "f8a20eb3878fb74917aa7efd04e8592e15bb898b2148768ed94f97fa6c1e0aff"
	I1101 10:40:21.219006  493183 cri.go:89] found id: "b7b00512262aea3dcc035878abe865da07ea524a984e03217db4298decd3413f"
	I1101 10:40:21.219010  493183 cri.go:89] found id: "b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac"
	I1101 10:40:21.219014  493183 cri.go:89] found id: "d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821"
	I1101 10:40:21.219017  493183 cri.go:89] found id: "f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39"
	I1101 10:40:21.219020  493183 cri.go:89] found id: "9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573"
	I1101 10:40:21.219023  493183 cri.go:89] found id: "30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2"
	I1101 10:40:21.219030  493183 cri.go:89] found id: "354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	I1101 10:40:21.219033  493183 cri.go:89] found id: "4e1c18e366f011597bd4500e494e129d7e239722c028290b019581f02af5459f"
	I1101 10:40:21.219049  493183 cri.go:89] found id: ""
	I1101 10:40:21.219107  493183 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:40:21.230441  493183 retry.go:31] will retry after 234.110609ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:40:21Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:40:21.464790  493183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:40:21.478088  493183 pause.go:52] kubelet running: false
	I1101 10:40:21.478153  493183 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:40:21.656690  493183 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:40:21.656763  493183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:40:21.733761  493183 cri.go:89] found id: "d46a0edeb94014e2b6de899870e120c1e9663026c65d0bae3809f3f4a5097fd4"
	I1101 10:40:21.733787  493183 cri.go:89] found id: "98c11ffd4d3f91309c84aba212eabefcb80ccd370b1c392fdbd639ef33c9cf14"
	I1101 10:40:21.733793  493183 cri.go:89] found id: "f8a20eb3878fb74917aa7efd04e8592e15bb898b2148768ed94f97fa6c1e0aff"
	I1101 10:40:21.733797  493183 cri.go:89] found id: "b7b00512262aea3dcc035878abe865da07ea524a984e03217db4298decd3413f"
	I1101 10:40:21.733800  493183 cri.go:89] found id: "b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac"
	I1101 10:40:21.733803  493183 cri.go:89] found id: "d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821"
	I1101 10:40:21.733806  493183 cri.go:89] found id: "f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39"
	I1101 10:40:21.733829  493183 cri.go:89] found id: "9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573"
	I1101 10:40:21.733840  493183 cri.go:89] found id: "30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2"
	I1101 10:40:21.733847  493183 cri.go:89] found id: "354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	I1101 10:40:21.733850  493183 cri.go:89] found id: "4e1c18e366f011597bd4500e494e129d7e239722c028290b019581f02af5459f"
	I1101 10:40:21.733854  493183 cri.go:89] found id: ""
	I1101 10:40:21.733923  493183 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:40:21.746075  493183 retry.go:31] will retry after 447.35109ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:40:21Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:40:22.193742  493183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:40:22.206790  493183 pause.go:52] kubelet running: false
	I1101 10:40:22.206912  493183 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:40:22.381755  493183 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:40:22.381886  493183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:40:22.449154  493183 cri.go:89] found id: "d46a0edeb94014e2b6de899870e120c1e9663026c65d0bae3809f3f4a5097fd4"
	I1101 10:40:22.449221  493183 cri.go:89] found id: "98c11ffd4d3f91309c84aba212eabefcb80ccd370b1c392fdbd639ef33c9cf14"
	I1101 10:40:22.449240  493183 cri.go:89] found id: "f8a20eb3878fb74917aa7efd04e8592e15bb898b2148768ed94f97fa6c1e0aff"
	I1101 10:40:22.449260  493183 cri.go:89] found id: "b7b00512262aea3dcc035878abe865da07ea524a984e03217db4298decd3413f"
	I1101 10:40:22.449280  493183 cri.go:89] found id: "b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac"
	I1101 10:40:22.449300  493183 cri.go:89] found id: "d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821"
	I1101 10:40:22.449319  493183 cri.go:89] found id: "f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39"
	I1101 10:40:22.449339  493183 cri.go:89] found id: "9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573"
	I1101 10:40:22.449358  493183 cri.go:89] found id: "30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2"
	I1101 10:40:22.449381  493183 cri.go:89] found id: "354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	I1101 10:40:22.449399  493183 cri.go:89] found id: "4e1c18e366f011597bd4500e494e129d7e239722c028290b019581f02af5459f"
	I1101 10:40:22.449423  493183 cri.go:89] found id: ""
	I1101 10:40:22.449505  493183 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:40:22.465286  493183 out.go:203] 
	W1101 10:40:22.468636  493183 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:40:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:40:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:40:22.468662  493183 out.go:285] * 
	* 
	W1101 10:40:22.476605  493183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:40:22.479848  493183 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-245904 --alsologtostderr -v=1 failed: exit status 80
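The stderr above shows the failure path behind exit status 80: pause first disables the kubelet, then repeatedly runs `sudo runc list -f json` to enumerate running containers, and every attempt exits with status 1 because /run/runc does not exist on this node, so the retries are exhausted and GUEST_PAUSE is reported. Below is a minimal, hypothetical reproduction of just that probe; the helper name, retry count, and backoff are assumptions, not minikube's implementation.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"time"
)

// listRuncContainers is a hypothetical helper: it runs the same command the
// pause flow runs above and returns its stdout, or an error carrying stderr.
func listRuncContainers() (string, error) {
	var out, errb bytes.Buffer
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	cmd.Stdout = &out
	cmd.Stderr = &errb
	if err := cmd.Run(); err != nil {
		return "", fmt.Errorf("runc list: %v, stderr: %s", err, errb.String())
	}
	return out.String(), nil
}

func main() {
	// A few attempts with a short backoff, loosely mirroring the retry.go lines above.
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Println(out)
			return
		}
		lastErr = err
		time.Sleep(time.Duration(attempt*200) * time.Millisecond)
	}
	// When /run/runc is missing, every attempt fails with
	// "open /run/runc: no such file or directory", matching the log above.
	fmt.Println("giving up:", lastErr)
}

Running the same command interactively on the node (for example via `minikube ssh`) would show the same stderr as the retries above.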
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-245904
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-245904:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e",
	        "Created": "2025-11-01T10:37:31.035014069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:39:14.832090165Z",
	            "FinishedAt": "2025-11-01T10:39:14.002746879Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/hosts",
	        "LogPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e-json.log",
	        "Name": "/default-k8s-diff-port-245904",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-245904:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-245904",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e",
	                "LowerDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-245904",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-245904/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-245904",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-245904",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-245904",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43509f8221a7cd70d36ba1dbdcc428a50956b78274ed1b4d20546c06da2fb41e",
	            "SandboxKey": "/var/run/docker/netns/43509f8221a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-245904": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:bb:2c:7b:fc:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ca453ec076d50791763a6c741bc9e74267d64bf587acdd7076e49fdbf14831b1",
	                    "EndpointID": "eb9790fe5ce71d770e0adad2bf1fa0cace1caeebd9dab0efaf2474778ad41386",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-245904",
	                        "a7be6b4a2a88"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
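For reference, the host-side SSH endpoint used earlier in the pause attempt (127.0.0.1:33460 in the stderr) comes from the "22/tcp" entry under NetworkSettings.Ports in this inspect output. A hedged sketch of that lookup is below, reusing the Go template string that appears verbatim in the cli_runner log line; the function name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort resolves the host port bound to the container's 22/tcp,
// using the same Go template seen in the cli_runner log line above.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("default-k8s-diff-port-245904")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 33460 in the inspect output above
}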
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904: exit status 2 (357.461293ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-245904 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-245904 logs -n 25: (1.415571445s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p newest-cni-761749 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-761749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ newest-cni-761749 image list --format=json                                                                                                                                                                                                    │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-761749 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-245904 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ delete  │ -p newest-cni-761749                                                                                                                                                                                                                          │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ delete  │ -p newest-cni-761749                                                                                                                                                                                                                          │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p auto-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-220636                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-245904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:40 UTC │
	│ image   │ default-k8s-diff-port-245904 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ pause   │ -p default-k8s-diff-port-245904 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:39:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:39:14.554230  489608 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:39:14.554611  489608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:39:14.554647  489608 out.go:374] Setting ErrFile to fd 2...
	I1101 10:39:14.554667  489608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:39:14.554965  489608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:39:14.555387  489608 out.go:368] Setting JSON to false
	I1101 10:39:14.556303  489608 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8504,"bootTime":1761985051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:39:14.556402  489608 start.go:143] virtualization:  
	I1101 10:39:14.559114  489608 out.go:179] * [default-k8s-diff-port-245904] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:39:14.563072  489608 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:39:14.563188  489608 notify.go:221] Checking for updates...
	I1101 10:39:14.569090  489608 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:39:14.572153  489608 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:14.575046  489608 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:39:14.577866  489608 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:39:14.580680  489608 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:39:14.584036  489608 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:14.584578  489608 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:39:14.614923  489608 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:39:14.615056  489608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:39:14.670936  489608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:39:14.661754965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:39:14.671052  489608 docker.go:319] overlay module found
	I1101 10:39:14.674304  489608 out.go:179] * Using the docker driver based on existing profile
	I1101 10:39:14.677137  489608 start.go:309] selected driver: docker
	I1101 10:39:14.677156  489608 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:14.677242  489608 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:39:14.678136  489608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:39:14.743793  489608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:39:14.723968749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:39:14.744146  489608 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:39:14.744184  489608 cni.go:84] Creating CNI manager for ""
	I1101 10:39:14.744247  489608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:14.744291  489608 start.go:353] cluster config:
	{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:14.747560  489608 out.go:179] * Starting "default-k8s-diff-port-245904" primary control-plane node in "default-k8s-diff-port-245904" cluster
	I1101 10:39:14.750445  489608 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:39:14.753508  489608 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:39:14.756409  489608 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:39:14.756477  489608 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:39:14.756488  489608 cache.go:59] Caching tarball of preloaded images
	I1101 10:39:14.756514  489608 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:39:14.756592  489608 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:39:14.756603  489608 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:39:14.756724  489608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:39:14.776988  489608 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:39:14.777013  489608 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:39:14.777028  489608 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:39:14.777055  489608 start.go:360] acquireMachinesLock for default-k8s-diff-port-245904: {Name:mkd19cff2a35f3bd59a365809e4cb064a7918a80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:39:14.777121  489608 start.go:364] duration metric: took 38.36µs to acquireMachinesLock for "default-k8s-diff-port-245904"
	I1101 10:39:14.777148  489608 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:39:14.777157  489608 fix.go:54] fixHost starting: 
	I1101 10:39:14.777424  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:14.795225  489608 fix.go:112] recreateIfNeeded on default-k8s-diff-port-245904: state=Stopped err=<nil>
	W1101 10:39:14.795257  489608 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:39:10.826171  488406 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-220636:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.578274612s)
	I1101 10:39:10.826202  488406 kic.go:203] duration metric: took 4.578407841s to extract preloaded images to volume ...
	W1101 10:39:10.826343  488406 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:39:10.826460  488406 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:39:10.883366  488406 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-220636 --name auto-220636 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-220636 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-220636 --network auto-220636 --ip 192.168.85.2 --volume auto-220636:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:39:11.193752  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Running}}
	I1101 10:39:11.217780  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:11.242360  488406 cli_runner.go:164] Run: docker exec auto-220636 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:39:11.293531  488406 oci.go:144] the created container "auto-220636" has a running status.
	I1101 10:39:11.293560  488406 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa...
	I1101 10:39:11.898308  488406 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:39:11.920583  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:11.937491  488406 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:39:11.937510  488406 kic_runner.go:114] Args: [docker exec --privileged auto-220636 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:39:11.977405  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:11.995291  488406 machine.go:94] provisionDockerMachine start ...
	I1101 10:39:11.995399  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:12.013670  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:12.014043  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:12.014064  488406 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:39:12.014783  488406 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
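The handshake-failed line above is routine right after the node container comes up: sshd inside it is not serving sessions yet, and the client retries until a later attempt succeeds a few seconds on. Below is a rough sketch of such a readiness wait, probing only at the TCP level against host port 33455 from this log (the real client repeats the full SSH handshake, which is an assumption about behaviour beyond what is logged):

// ssh_wait.go - sketch of waiting for the forwarded SSH port to accept connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33455" // host port Docker mapped to 22/tcp for auto-220636 in this run
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is accepting connections:", addr)
			return
		}
		time.Sleep(2 * time.Second) // not ready yet; retry, as the log does
	}
	fmt.Println("gave up waiting for", addr)
}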
	I1101 10:39:15.201830  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-220636
	
	I1101 10:39:15.201861  488406 ubuntu.go:182] provisioning hostname "auto-220636"
	I1101 10:39:15.201927  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:15.227360  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:15.227684  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:15.227703  488406 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-220636 && echo "auto-220636" | sudo tee /etc/hostname
	I1101 10:39:15.422162  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-220636
	
	I1101 10:39:15.422263  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:15.451154  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:15.451485  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:15.451509  488406 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-220636' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-220636/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-220636' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:39:15.629038  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:39:15.629069  488406 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:39:15.629089  488406 ubuntu.go:190] setting up certificates
	I1101 10:39:15.629099  488406 provision.go:84] configureAuth start
	I1101 10:39:15.629160  488406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-220636
	I1101 10:39:15.660825  488406 provision.go:143] copyHostCerts
	I1101 10:39:15.660886  488406 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:39:15.660898  488406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:39:15.660968  488406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:39:15.661064  488406 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:39:15.661080  488406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:39:15.661108  488406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:39:15.661179  488406 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:39:15.661189  488406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:39:15.661217  488406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:39:15.661281  488406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.auto-220636 san=[127.0.0.1 192.168.85.2 auto-220636 localhost minikube]
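provision.go:117 above generates the machine's server certificate with SANs covering 127.0.0.1, 192.168.85.2, auto-220636, localhost and minikube, signed by the profile CA (ca.pem/ca-key.pem). The sketch below builds a certificate carrying the same SAN list with crypto/x509; it self-signs to stay short, which is a simplification of the logged step:

// san_cert.go - illustrative sketch: a certificate with the SANs listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-220636"}}, // org string from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"auto-220636", "localhost", "minikube"},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}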
	I1101 10:39:15.926962  488406 provision.go:177] copyRemoteCerts
	I1101 10:39:15.927095  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:39:15.927160  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:15.944207  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.050866  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:39:16.070966  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:39:16.090157  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:39:16.109457  488406 provision.go:87] duration metric: took 480.33261ms to configureAuth
	I1101 10:39:16.109528  488406 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:39:16.109845  488406 config.go:182] Loaded profile config "auto-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:16.109970  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.127653  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:16.127969  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:16.127989  488406 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:39:16.387681  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:39:16.387715  488406 machine.go:97] duration metric: took 4.392395135s to provisionDockerMachine
	I1101 10:39:16.387726  488406 client.go:176] duration metric: took 10.832974346s to LocalClient.Create
	I1101 10:39:16.387739  488406 start.go:167] duration metric: took 10.833041728s to libmachine.API.Create "auto-220636"
	I1101 10:39:16.387746  488406 start.go:293] postStartSetup for "auto-220636" (driver="docker")
	I1101 10:39:16.387761  488406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:39:16.387823  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:39:16.387865  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.406181  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.514067  488406 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:39:16.517569  488406 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:39:16.517599  488406 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:39:16.517610  488406 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:39:16.517682  488406 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:39:16.517816  488406 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:39:16.517931  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:39:16.525474  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:16.544713  488406 start.go:296] duration metric: took 156.951232ms for postStartSetup
	I1101 10:39:16.545078  488406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-220636
	I1101 10:39:16.567817  488406 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/config.json ...
	I1101 10:39:16.568098  488406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:39:16.568141  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.585290  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.686761  488406 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:39:16.691236  488406 start.go:128] duration metric: took 11.140245372s to createHost
	I1101 10:39:16.691259  488406 start.go:83] releasing machines lock for "auto-220636", held for 11.140379463s
	I1101 10:39:16.691337  488406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-220636
	I1101 10:39:16.708386  488406 ssh_runner.go:195] Run: cat /version.json
	I1101 10:39:16.708449  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.708541  488406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:39:16.708599  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.728958  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.738886  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.833505  488406 ssh_runner.go:195] Run: systemctl --version
	I1101 10:39:16.957761  488406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:39:16.994506  488406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:39:16.999001  488406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:39:16.999149  488406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:39:17.028653  488406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:39:17.028675  488406 start.go:496] detecting cgroup driver to use...
	I1101 10:39:17.028707  488406 detect.go:187] detected "cgroupfs" cgroup driver on host os
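detect.go above reports the host's cgroup setup, which later drives the cgroup_manager value written into /etc/crio/crio.conf.d/02-crio.conf. Below is a loose sketch of telling cgroup v1 from v2 on the host; how minikube maps that to "cgroupfs" versus "systemd" is not visible in this log and is assumed:

// cgroup_detect.go - sketch of a host cgroup check (assumed heuristic, not minikube source).
package main

import (
	"fmt"
	"os"
)

func main() {
	// On a unified (v2) hierarchy this file exists at the cgroup root.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
		return
	}
	fmt.Println("cgroup v1 (legacy hierarchy)") // this run reports the "cgroupfs" driver
}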
	I1101 10:39:17.028756  488406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:39:17.047641  488406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:39:17.060827  488406 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:39:17.060931  488406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:39:17.078861  488406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:39:17.098245  488406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:39:17.219128  488406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:39:17.338425  488406 docker.go:234] disabling docker service ...
	I1101 10:39:17.338497  488406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:39:17.359788  488406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:39:17.373387  488406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:39:17.497970  488406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:39:17.620484  488406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:39:17.633447  488406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:39:17.647602  488406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:39:17.647669  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.656611  488406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:39:17.656711  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.666009  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.674725  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.683730  488406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:39:17.691979  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.701018  488406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.714896  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.724133  488406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:39:17.732311  488406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:39:17.740030  488406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:17.857738  488406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:39:17.982190  488406 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:39:17.982309  488406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:39:17.986567  488406 start.go:564] Will wait 60s for crictl version
	I1101 10:39:17.986639  488406 ssh_runner.go:195] Run: which crictl
	I1101 10:39:17.990288  488406 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:39:18.021331  488406 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:39:18.021496  488406 ssh_runner.go:195] Run: crio --version
	I1101 10:39:18.049433  488406 ssh_runner.go:195] Run: crio --version
	I1101 10:39:18.087950  488406 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:39:14.798658  489608 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-245904" ...
	I1101 10:39:14.798750  489608 cli_runner.go:164] Run: docker start default-k8s-diff-port-245904
	I1101 10:39:15.110369  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:15.135998  489608 kic.go:430] container "default-k8s-diff-port-245904" state is running.
	I1101 10:39:15.136406  489608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:39:15.166813  489608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:39:15.167069  489608 machine.go:94] provisionDockerMachine start ...
	I1101 10:39:15.167132  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:15.186784  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:15.187140  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:15.187157  489608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:39:15.187873  489608 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:39:18.353625  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:39:18.353666  489608 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-245904"
	I1101 10:39:18.353748  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:18.373041  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:18.373341  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:18.373359  489608 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-245904 && echo "default-k8s-diff-port-245904" | sudo tee /etc/hostname
	I1101 10:39:18.578271  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:39:18.578353  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:18.599691  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:18.599990  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:18.600009  489608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-245904' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-245904/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-245904' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:39:18.767431  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:39:18.767461  489608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:39:18.767498  489608 ubuntu.go:190] setting up certificates
	I1101 10:39:18.767524  489608 provision.go:84] configureAuth start
	I1101 10:39:18.767608  489608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:39:18.793853  489608 provision.go:143] copyHostCerts
	I1101 10:39:18.793943  489608 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:39:18.793962  489608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:39:18.794050  489608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:39:18.794168  489608 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:39:18.794180  489608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:39:18.794212  489608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:39:18.794288  489608 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:39:18.794298  489608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:39:18.794330  489608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:39:18.794400  489608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-245904 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-245904 localhost minikube]
	I1101 10:39:19.325859  489608 provision.go:177] copyRemoteCerts
	I1101 10:39:19.325931  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:39:19.325994  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:19.344682  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:19.470641  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:39:19.490897  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:39:19.511301  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:39:19.531077  489608 provision.go:87] duration metric: took 763.5269ms to configureAuth
	I1101 10:39:19.531101  489608 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:39:19.531299  489608 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:19.531405  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:18.090876  488406 cli_runner.go:164] Run: docker network inspect auto-220636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:39:18.107815  488406 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:39:18.111883  488406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:39:18.122236  488406 kubeadm.go:884] updating cluster {Name:auto-220636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-220636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:39:18.122356  488406 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:39:18.122418  488406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:18.159986  488406 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:18.160010  488406 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:39:18.160068  488406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:18.185725  488406 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:18.185746  488406 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:39:18.185754  488406 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:39:18.185851  488406 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-220636 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-220636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:39:18.185930  488406 ssh_runner.go:195] Run: crio config
	I1101 10:39:18.263075  488406 cni.go:84] Creating CNI manager for ""
	I1101 10:39:18.263503  488406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:18.263524  488406 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:39:18.263579  488406 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-220636 NodeName:auto-220636 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:39:18.263729  488406 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-220636"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:39:18.263817  488406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:39:18.274660  488406 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:39:18.274774  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:39:18.284113  488406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 10:39:18.299022  488406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:39:18.314666  488406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1101 10:39:18.328719  488406 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:39:18.332446  488406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:39:18.342649  488406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:18.498815  488406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:18.515322  488406 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636 for IP: 192.168.85.2
	I1101 10:39:18.515344  488406 certs.go:195] generating shared ca certs ...
	I1101 10:39:18.515360  488406 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:18.515495  488406 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:39:18.515542  488406 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:39:18.515552  488406 certs.go:257] generating profile certs ...
	I1101 10:39:18.515607  488406 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.key
	I1101 10:39:18.515625  488406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt with IP's: []
	I1101 10:39:19.161666  488406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt ...
	I1101 10:39:19.161759  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: {Name:mk6431b3df0d248a167255a91e18586ae16b9974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.161992  488406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.key ...
	I1101 10:39:19.162033  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.key: {Name:mk593c24b085637d1e3004773d11fa7baec8761e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.162178  488406 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1
	I1101 10:39:19.162221  488406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:39:19.426859  488406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1 ...
	I1101 10:39:19.426895  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1: {Name:mk01906c5c93f94bf5ff3c4d19c73a9d57fb53d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.427137  488406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1 ...
	I1101 10:39:19.427155  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1: {Name:mk00bcf5f7d2853eb6eeaf5cecf8f0b4733f15b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.427264  488406 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt
	I1101 10:39:19.427355  488406 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key
	I1101 10:39:19.427417  488406 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key
	I1101 10:39:19.427438  488406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt with IP's: []
	I1101 10:39:19.715157  488406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt ...
	I1101 10:39:19.715189  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt: {Name:mkcc5b12f0ed8ca4d8068df2908c316e1853316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.715388  488406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key ...
	I1101 10:39:19.715401  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key: {Name:mk50898e43091d82395d7464c9b66369c615007c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.715600  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:39:19.715645  488406 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:39:19.715655  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:39:19.715679  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:39:19.715712  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:39:19.715734  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:39:19.715780  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:19.716335  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:39:19.736287  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:39:19.756385  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:39:19.777609  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:39:19.797250  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 10:39:19.815788  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:39:19.833572  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:39:19.858377  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:39:19.879647  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:39:19.900190  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:39:19.922906  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:39:19.958198  488406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:39:19.978221  488406 ssh_runner.go:195] Run: openssl version
	I1101 10:39:19.984508  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:39:19.993511  488406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:19.997469  488406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:19.997536  488406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:20.040193  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:39:20.051060  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:39:20.060240  488406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:39:20.064365  488406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:39:20.064429  488406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:39:20.108700  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:39:20.117675  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:39:20.126716  488406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:39:20.132686  488406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:39:20.132754  488406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:39:20.192952  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
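The openssl/ln pairs above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash, e.g. b5213941.0 for minikubeCA.pem. Below is a small sketch that reproduces the hash-to-symlink step, assuming the openssl CLI is available locally; in the log the equivalent commands run over SSH inside the node:

// hash_link.go - sketch of deriving the /etc/ssl/certs/<hash>.0 link name used above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching /etc/ssl/certs/b5213941.0 in the log
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}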
	I1101 10:39:20.205890  488406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:39:20.209851  488406 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
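certs.go:400 above treats the missing apiserver-kubelet-client.crt as a sign that this is the cluster's first start, so a fresh kubeadm init follows rather than a restart path. Below is a minimal sketch of that presence check (path taken from this log; what happens on each branch beyond what is logged is assumed):

// first_start_check.go - sketch of the "likely first start" heuristic logged above.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("cert missing - treating this as a first start, so kubeadm init runs")
		return
	}
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Println("cert present - an existing cluster would be reused")
}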
	I1101 10:39:20.209908  488406 kubeadm.go:401] StartCluster: {Name:auto-220636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-220636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:20.209990  488406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:39:20.210054  488406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:39:20.245999  488406 cri.go:89] found id: ""
	I1101 10:39:20.246076  488406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:39:20.254336  488406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:39:20.262185  488406 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:39:20.262250  488406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:39:20.272341  488406 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:39:20.272355  488406 kubeadm.go:158] found existing configuration files:
	
	I1101 10:39:20.272393  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:39:20.283529  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:39:20.283597  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:39:20.292775  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:39:20.303228  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:39:20.303292  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:39:20.312334  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:39:20.329419  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:39:20.329486  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:39:20.339754  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:39:20.349136  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:39:20.349207  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:39:20.357952  488406 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:39:20.409348  488406 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:39:20.409717  488406 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:39:20.444121  488406 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:39:20.444249  488406 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:39:20.444308  488406 kubeadm.go:319] OS: Linux
	I1101 10:39:20.444382  488406 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:39:20.444464  488406 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:39:20.444545  488406 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:39:20.444629  488406 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:39:20.444710  488406 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:39:20.444788  488406 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:39:20.444855  488406 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:39:20.444916  488406 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:39:20.444971  488406 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:39:20.533056  488406 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:39:20.533172  488406 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:39:20.533268  488406 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:39:20.542396  488406 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:39:19.575799  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:19.576200  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:19.576221  489608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:39:19.945808  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:39:19.945835  489608 machine.go:97] duration metric: took 4.77875463s to provisionDockerMachine
	I1101 10:39:19.945847  489608 start.go:293] postStartSetup for "default-k8s-diff-port-245904" (driver="docker")
	I1101 10:39:19.945858  489608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:39:19.945930  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:39:19.945976  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:19.973071  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.087653  489608 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:39:20.092462  489608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:39:20.092490  489608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:39:20.092501  489608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:39:20.092560  489608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:39:20.092648  489608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:39:20.092756  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:39:20.102551  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:20.127559  489608 start.go:296] duration metric: took 181.699212ms for postStartSetup
	I1101 10:39:20.127674  489608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:39:20.127744  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:20.149790  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.272005  489608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:39:20.279190  489608 fix.go:56] duration metric: took 5.502025419s for fixHost
	I1101 10:39:20.279212  489608 start.go:83] releasing machines lock for "default-k8s-diff-port-245904", held for 5.50207959s
	I1101 10:39:20.279277  489608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:39:20.298317  489608 ssh_runner.go:195] Run: cat /version.json
	I1101 10:39:20.298363  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:20.298585  489608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:39:20.298661  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:20.331251  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.337266  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.466384  489608 ssh_runner.go:195] Run: systemctl --version
	I1101 10:39:20.568402  489608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:39:20.619712  489608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:39:20.627022  489608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:39:20.627229  489608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:39:20.638856  489608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:39:20.638948  489608 start.go:496] detecting cgroup driver to use...
	I1101 10:39:20.638994  489608 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:39:20.639089  489608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:39:20.660036  489608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:39:20.678635  489608 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:39:20.678770  489608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:39:20.700279  489608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:39:20.719091  489608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:39:20.868611  489608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:39:21.045986  489608 docker.go:234] disabling docker service ...
	I1101 10:39:21.046124  489608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:39:21.064377  489608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:39:21.079003  489608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:39:21.231745  489608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:39:21.378629  489608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:39:21.393044  489608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:39:21.408056  489608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:39:21.408135  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.417200  489608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:39:21.417268  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.426661  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.435911  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.446149  489608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:39:21.454784  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.463990  489608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.472878  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.481782  489608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:39:21.489499  489608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:39:21.497228  489608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:21.636576  489608 ssh_runner.go:195] Run: sudo systemctl restart crio
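The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", and makes sure default_sysctls contains net.ipv4.ip_unprivileged_port_start=0, after which systemd is reloaded and CRI-O restarted so the drop-in takes effect. As a rough Go sketch, the first of those edits looks like this (the path and the regex are taken from the log; the rest is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a CRI-O drop-in,
// mirroring the sed command from the log:
//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Println(err)
	}
}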
	I1101 10:39:21.801311  489608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:39:21.801383  489608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:39:21.806064  489608 start.go:564] Will wait 60s for crictl version
	I1101 10:39:21.806214  489608 ssh_runner.go:195] Run: which crictl
	I1101 10:39:21.810712  489608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:39:21.837052  489608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:39:21.837200  489608 ssh_runner.go:195] Run: crio --version
	I1101 10:39:21.870707  489608 ssh_runner.go:195] Run: crio --version
	I1101 10:39:21.912339  489608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:39:21.915332  489608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:39:21.938411  489608 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:39:21.942613  489608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:39:21.964445  489608 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:39:21.964591  489608 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:39:21.964644  489608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:22.031995  489608 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:22.032016  489608 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:39:22.032073  489608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:22.072189  489608 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:22.072211  489608 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:39:22.072219  489608 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 10:39:22.072315  489608 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-245904 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
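The doubled ExecStart= in the drop-in above is standard systemd usage: the empty ExecStart= clears whatever the base kubelet.service defines, and the second line supplies the full kubelet command with the node-specific flags (--hostname-override, --node-ip, the bootstrap kubeconfig, and the cgroups-per-qos / enforce-node-allocatable settings). A few lines further down the same content is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal Go sketch of rendering such a drop-in (paths and flag values are copied from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
)

// writeKubeletDropIn renders a systemd drop-in that first clears the base
// unit's ExecStart (the empty "ExecStart=" line) and then sets the full
// kubelet command line with the node-specific flags.
func writeKubeletDropIn(path, nodeName, nodeIP string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, nodeName, nodeIP)
	return os.WriteFile(path, []byte(unit), 0o644)
}

func main() {
	err := writeKubeletDropIn(
		"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		"default-k8s-diff-port-245904", "192.168.76.2",
	)
	if err != nil {
		fmt.Println(err)
	}
}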
	I1101 10:39:22.072399  489608 ssh_runner.go:195] Run: crio config
	I1101 10:39:22.153290  489608 cni.go:84] Creating CNI manager for ""
	I1101 10:39:22.153353  489608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:22.153395  489608 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:39:22.153440  489608 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-245904 NodeName:default-k8s-diff-port-245904 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:39:22.153640  489608 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-245904"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
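The kubeadm.yaml dumped above is a single file carrying four YAML documents separated by "---": an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration (kubelet.config.k8s.io/v1beta1), and a KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1); kubeadm picks each one up by its kind. One quick way to sanity-check which kinds such a file carries, sketched in Go (the yaml package and the hard-coded path are assumptions for illustration, not part of the test):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// listKinds decodes each document in a multi-document YAML file and prints
// its apiVersion and kind, enough to confirm the kubeadm config carries the
// expected four documents.
func listKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		fmt.Println(doc.APIVersion, doc.Kind)
	}
	return nil
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}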
	
	I1101 10:39:22.153766  489608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:39:22.162833  489608 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:39:22.162984  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:39:22.171645  489608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 10:39:22.189478  489608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:39:22.203837  489608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 10:39:22.218175  489608 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:39:22.222170  489608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
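The /etc/hosts update above goes through a temp file and sudo cp rather than sed -i or mv because, inside the container, /etc/hosts is a bind mount managed by Docker: it can be rewritten in place, but renaming a new file over it would fail. The same idea in Go, as an illustrative sketch (file handling only, no sudo; the helper is not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname and
// appends a fresh "<ip>\t<host>" entry, then writes the file back in place
// (truncate and rewrite, no rename, so a bind-mounted /etc/hosts survives).
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}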
	I1101 10:39:22.232152  489608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:22.373569  489608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:22.391488  489608 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904 for IP: 192.168.76.2
	I1101 10:39:22.391507  489608 certs.go:195] generating shared ca certs ...
	I1101 10:39:22.391523  489608 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:22.391658  489608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:39:22.391703  489608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:39:22.391715  489608 certs.go:257] generating profile certs ...
	I1101 10:39:22.391798  489608 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key
	I1101 10:39:22.391867  489608 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67
	I1101 10:39:22.391902  489608 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key
	I1101 10:39:22.392005  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:39:22.392031  489608 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:39:22.392039  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:39:22.392064  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:39:22.392084  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:39:22.392106  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:39:22.392149  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:22.392789  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:39:22.433518  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:39:22.507030  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:39:22.593190  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:39:22.622164  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 10:39:22.674540  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:39:22.700101  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:39:22.730753  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:39:22.763770  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:39:22.780376  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:39:22.798279  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:39:22.814978  489608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:39:22.827941  489608 ssh_runner.go:195] Run: openssl version
	I1101 10:39:22.835470  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:39:22.843733  489608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:39:22.848843  489608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:39:22.848925  489608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:39:22.890271  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:39:22.898357  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:39:22.906697  489608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:39:22.912662  489608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:39:22.912745  489608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:39:22.956444  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:39:22.964543  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:39:22.973548  489608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:22.978394  489608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:22.978476  489608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:23.021014  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:39:23.029931  489608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:39:23.035044  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:39:23.102988  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:39:23.176320  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:39:23.250702  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:39:23.348534  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:39:23.476116  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
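Each openssl x509 -checkend 86400 above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would push minikube toward regenerating that certificate instead of reusing it. Here they all pass, so the existing control-plane certs are kept. The equivalent check in Go, as a sketch (the path is just one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file will
// expire within the given window, i.e. the condition under which
// `openssl x509 -checkend <seconds>` exits non-zero.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}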
	I1101 10:39:23.561168  489608 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:23.561265  489608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:39:23.561337  489608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:39:23.634963  489608 cri.go:89] found id: "d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821"
	I1101 10:39:23.634987  489608 cri.go:89] found id: "f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39"
	I1101 10:39:23.635000  489608 cri.go:89] found id: "9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573"
	I1101 10:39:23.635004  489608 cri.go:89] found id: "30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2"
	I1101 10:39:23.635008  489608 cri.go:89] found id: ""
	I1101 10:39:23.635074  489608 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:39:23.671705  489608 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:39:23Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:39:23.671820  489608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:39:23.702003  489608 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:39:23.702024  489608 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:39:23.702124  489608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:39:23.714299  489608 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:39:23.714806  489608 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-245904" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:23.714921  489608 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-245904" cluster setting kubeconfig missing "default-k8s-diff-port-245904" context setting]
	I1101 10:39:23.715248  489608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:23.718487  489608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:39:23.732327  489608 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:39:23.732360  489608 kubeadm.go:602] duration metric: took 30.329927ms to restartPrimaryControlPlane
	I1101 10:39:23.732370  489608 kubeadm.go:403] duration metric: took 171.21342ms to StartCluster
	I1101 10:39:23.732386  489608 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:23.732456  489608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:23.733138  489608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:23.733383  489608 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:39:23.733726  489608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:39:23.733801  489608 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-245904"
	I1101 10:39:23.733818  489608 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-245904"
	W1101 10:39:23.733823  489608 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:39:23.733845  489608 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:39:23.734306  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.734809  489608 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:23.734900  489608 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-245904"
	I1101 10:39:23.734925  489608 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-245904"
	W1101 10:39:23.734944  489608 addons.go:248] addon dashboard should already be in state true
	I1101 10:39:23.734988  489608 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:39:23.735476  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.737783  489608 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-245904"
	I1101 10:39:23.737805  489608 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-245904"
	I1101 10:39:23.738090  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.740585  489608 out.go:179] * Verifying Kubernetes components...
	I1101 10:39:23.744131  489608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:23.786551  489608 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-245904"
	W1101 10:39:23.786573  489608 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:39:23.786597  489608 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:39:23.787012  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.802632  489608 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:39:23.802746  489608 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:39:23.805626  489608 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:23.805645  489608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:39:23.805725  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:23.809570  489608 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:39:23.819848  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:39:23.819882  489608 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:39:23.819989  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:23.836735  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:23.843506  489608 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:23.843529  489608 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:39:23.843590  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:23.872933  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:23.883741  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:24.132699  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:39:24.132772  489608 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:39:24.249105  489608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:24.263422  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:39:24.263486  489608 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:39:24.304056  489608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:24.328572  489608 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:39:24.415051  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:39:24.415082  489608 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:39:24.416381  489608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:24.521244  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:39:24.521262  489608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:39:20.545935  488406 out.go:252]   - Generating certificates and keys ...
	I1101 10:39:20.546029  488406 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:39:20.546097  488406 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:39:20.888946  488406 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:39:22.323822  488406 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:39:22.562650  488406 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:39:23.465636  488406 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:39:24.618090  488406 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:39:24.618227  488406 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-220636 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:39:24.816201  488406 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:39:24.816342  488406 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-220636 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:39:24.750799  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:39:24.750827  489608 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:39:24.792237  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:39:24.792262  489608 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:39:24.842912  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:39:24.842936  489608 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:39:24.891553  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:39:24.891578  489608 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:39:24.930241  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:39:24.930268  489608 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:39:24.966615  489608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:39:25.661808  488406 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:39:26.858144  488406 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:39:26.938012  488406 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:39:26.938089  488406 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:39:28.330033  488406 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:39:28.658066  488406 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:39:29.337446  488406 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:39:29.770063  488406 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:39:30.410055  488406 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:39:30.410157  488406 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:39:30.420820  488406 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:39:31.803500  489608 node_ready.go:49] node "default-k8s-diff-port-245904" is "Ready"
	I1101 10:39:31.803532  489608 node_ready.go:38] duration metric: took 7.474877486s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:39:31.803547  489608 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:39:31.803604  489608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:39:30.424284  488406 out.go:252]   - Booting up control plane ...
	I1101 10:39:30.424400  488406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:39:30.424482  488406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:39:30.424561  488406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:39:30.462107  488406 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:39:30.462225  488406 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:39:30.472619  488406 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:39:30.472965  488406 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:39:30.473014  488406 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:39:30.692054  488406 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:39:30.692179  488406 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:39:32.194031  488406 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50154482s
	I1101 10:39:32.197149  488406 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:39:32.197536  488406 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:39:32.198430  488406 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:39:32.198980  488406 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:39:35.270912  489608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.854503946s)
	I1101 10:39:35.271173  489608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.967046378s)
	I1101 10:39:35.741931  489608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.775273901s)
	I1101 10:39:35.742143  489608 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.938522475s)
	I1101 10:39:35.742203  489608 api_server.go:72] duration metric: took 12.008781433s to wait for apiserver process to appear ...
	I1101 10:39:35.742231  489608 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:39:35.742278  489608 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:39:35.744802  489608 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-245904 addons enable metrics-server
	
	I1101 10:39:35.747742  489608 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:39:35.750568  489608 addons.go:515] duration metric: took 12.016843001s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:39:35.773670  489608 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:39:35.773737  489608 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
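A 500 from /healthz immediately after the restart is expected: every check is already ok except the rbac/bootstrap-roles post-start hook, which only turns healthy once the apiserver has finished creating its bootstrap RBAC objects. minikube keeps polling, and the next probe about half a second later returns 200. A stripped-down version of that polling loop in Go (endpoint, TLS handling and timeout here are illustrative, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it returns
// 200 OK or the deadline passes. The apiserver serves /healthz over a
// cert the host does not trust, so this CI-style probe skips verification.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.76.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}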
	I1101 10:39:36.242941  489608 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:39:36.253173  489608 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:39:36.254326  489608 api_server.go:141] control plane version: v1.34.1
	I1101 10:39:36.254353  489608 api_server.go:131] duration metric: took 512.101761ms to wait for apiserver health ...
	I1101 10:39:36.254363  489608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:39:36.262045  489608 system_pods.go:59] 8 kube-system pods found
	I1101 10:39:36.262087  489608 system_pods.go:61] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:39:36.262098  489608 system_pods.go:61] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:39:36.262104  489608 system_pods.go:61] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:39:36.262112  489608 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:39:36.262118  489608 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:39:36.262127  489608 system_pods.go:61] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:39:36.262135  489608 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:39:36.262145  489608 system_pods.go:61] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:39:36.262150  489608 system_pods.go:74] duration metric: took 7.781785ms to wait for pod list to return data ...
	I1101 10:39:36.262165  489608 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:39:36.265034  489608 default_sa.go:45] found service account: "default"
	I1101 10:39:36.265059  489608 default_sa.go:55] duration metric: took 2.887633ms for default service account to be created ...
	I1101 10:39:36.265069  489608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:39:36.279773  489608 system_pods.go:86] 8 kube-system pods found
	I1101 10:39:36.279807  489608 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:39:36.279820  489608 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:39:36.279826  489608 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:39:36.279833  489608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:39:36.279838  489608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:39:36.279847  489608 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:39:36.279853  489608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:39:36.279864  489608 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:39:36.279871  489608 system_pods.go:126] duration metric: took 14.796606ms to wait for k8s-apps to be running ...
	I1101 10:39:36.279883  489608 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:39:36.279939  489608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:39:36.316244  489608 system_svc.go:56] duration metric: took 36.351299ms WaitForService to wait for kubelet
	I1101 10:39:36.316273  489608 kubeadm.go:587] duration metric: took 12.582850527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:39:36.316296  489608 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:39:36.324483  489608 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:39:36.324516  489608 node_conditions.go:123] node cpu capacity is 2
	I1101 10:39:36.324530  489608 node_conditions.go:105] duration metric: took 8.227282ms to run NodePressure ...
	I1101 10:39:36.324542  489608 start.go:242] waiting for startup goroutines ...
	I1101 10:39:36.324549  489608 start.go:247] waiting for cluster config update ...
	I1101 10:39:36.324561  489608 start.go:256] writing updated cluster config ...
	I1101 10:39:36.324860  489608 ssh_runner.go:195] Run: rm -f paused
	I1101 10:39:36.334200  489608 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:39:36.338418  489608 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:39:38.352786  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:39:38.243518  488406 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.044631694s
	I1101 10:39:39.031577  488406 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.831127965s
	I1101 10:39:41.201029  488406 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.003051432s
	I1101 10:39:41.225285  488406 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:39:41.243999  488406 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:39:41.266896  488406 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:39:41.267561  488406 kubeadm.go:319] [mark-control-plane] Marking the node auto-220636 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:39:41.283386  488406 kubeadm.go:319] [bootstrap-token] Using token: go5y2n.yhiz6aziwoo1svrx
	I1101 10:39:41.286470  488406 out.go:252]   - Configuring RBAC rules ...
	I1101 10:39:41.286594  488406 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:39:41.295820  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:39:41.307202  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:39:41.313089  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:39:41.320570  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:39:41.326711  488406 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:39:41.608684  488406 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:39:42.160222  488406 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:39:42.615660  488406 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:39:42.615679  488406 kubeadm.go:319] 
	I1101 10:39:42.615743  488406 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:39:42.615748  488406 kubeadm.go:319] 
	I1101 10:39:42.615829  488406 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:39:42.615834  488406 kubeadm.go:319] 
	I1101 10:39:42.615860  488406 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:39:42.615922  488406 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:39:42.615975  488406 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:39:42.615979  488406 kubeadm.go:319] 
	I1101 10:39:42.616036  488406 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:39:42.616040  488406 kubeadm.go:319] 
	I1101 10:39:42.616090  488406 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:39:42.616094  488406 kubeadm.go:319] 
	I1101 10:39:42.616156  488406 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:39:42.616235  488406 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:39:42.616306  488406 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:39:42.616311  488406 kubeadm.go:319] 
	I1101 10:39:42.616401  488406 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:39:42.616481  488406 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:39:42.616486  488406 kubeadm.go:319] 
	I1101 10:39:42.616574  488406 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token go5y2n.yhiz6aziwoo1svrx \
	I1101 10:39:42.616682  488406 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:39:42.616703  488406 kubeadm.go:319] 	--control-plane 
	I1101 10:39:42.616707  488406 kubeadm.go:319] 
	I1101 10:39:42.616796  488406 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:39:42.616800  488406 kubeadm.go:319] 
	I1101 10:39:42.616886  488406 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token go5y2n.yhiz6aziwoo1svrx \
	I1101 10:39:42.616992  488406 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:39:42.625501  488406 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:39:42.625768  488406 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:39:42.625887  488406 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:39:42.625902  488406 cni.go:84] Creating CNI manager for ""
	I1101 10:39:42.625910  488406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:42.629207  488406 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 10:39:40.386895  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:42.844929  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:39:42.632031  488406 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:39:42.639086  488406 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:39:42.639105  488406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:39:42.661497  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:39:43.093345  488406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:39:43.093483  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:43.093564  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-220636 minikube.k8s.io/updated_at=2025_11_01T10_39_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=auto-220636 minikube.k8s.io/primary=true
	I1101 10:39:43.353573  488406 ops.go:34] apiserver oom_adj: -16
	I1101 10:39:43.353722  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:43.853846  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:44.354196  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:44.854222  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:45.354715  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:45.854560  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:46.353817  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:46.853840  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:47.354140  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:47.854188  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:48.146703  488406 kubeadm.go:1114] duration metric: took 5.053263737s to wait for elevateKubeSystemPrivileges
	I1101 10:39:48.146735  488406 kubeadm.go:403] duration metric: took 27.936831611s to StartCluster
	I1101 10:39:48.146762  488406 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:48.146825  488406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:48.147857  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:48.148075  488406 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:39:48.148210  488406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:39:48.148448  488406 config.go:182] Loaded profile config "auto-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:48.148428  488406 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:39:48.148549  488406 addons.go:70] Setting storage-provisioner=true in profile "auto-220636"
	I1101 10:39:48.148565  488406 addons.go:239] Setting addon storage-provisioner=true in "auto-220636"
	I1101 10:39:48.148590  488406 host.go:66] Checking if "auto-220636" exists ...
	I1101 10:39:48.149078  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:48.149396  488406 addons.go:70] Setting default-storageclass=true in profile "auto-220636"
	I1101 10:39:48.149434  488406 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-220636"
	I1101 10:39:48.149813  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:48.153898  488406 out.go:179] * Verifying Kubernetes components...
	I1101 10:39:48.157497  488406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:48.197070  488406 addons.go:239] Setting addon default-storageclass=true in "auto-220636"
	I1101 10:39:48.197113  488406 host.go:66] Checking if "auto-220636" exists ...
	I1101 10:39:48.197542  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:48.210824  488406 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1101 10:39:45.350126  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:47.353389  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:39:48.213941  488406 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:48.213964  488406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:39:48.214036  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:48.248273  488406 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:48.248294  488406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:39:48.248354  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:48.262554  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:48.289586  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:48.518968  488406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:39:48.670402  488406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:48.867079  488406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:48.872326  488406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:49.656818  488406 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.137817224s)
	I1101 10:39:49.656847  488406 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:39:49.658838  488406 node_ready.go:35] waiting up to 15m0s for node "auto-220636" to be "Ready" ...
	I1101 10:39:50.107159  488406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234795182s)
	I1101 10:39:50.110634  488406 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 10:39:50.113867  488406 addons.go:515] duration metric: took 1.965430944s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 10:39:50.163639  488406 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-220636" context rescaled to 1 replicas
	W1101 10:39:49.850950  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:52.344640  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:51.662795  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:39:54.162138  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:39:54.843824  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:56.845146  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:59.344491  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:56.162610  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:39:58.661860  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:01.843637  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:40:03.843730  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:40:00.662748  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:03.161824  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:05.162292  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:06.343918  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:40:07.344556  489608 pod_ready.go:94] pod "coredns-66bc5c9577-h2552" is "Ready"
	I1101 10:40:07.344584  489608 pod_ready.go:86] duration metric: took 31.006139856s for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.346874  489608 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.351329  489608 pod_ready.go:94] pod "etcd-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:07.351355  489608 pod_ready.go:86] duration metric: took 4.451377ms for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.354149  489608 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.362799  489608 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:07.362837  489608 pod_ready.go:86] duration metric: took 8.663284ms for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.365375  489608 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.542547  489608 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:07.542583  489608 pod_ready.go:86] duration metric: took 177.182885ms for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.744233  489608 pod_ready.go:83] waiting for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.142528  489608 pod_ready.go:94] pod "kube-proxy-8d8hl" is "Ready"
	I1101 10:40:08.142556  489608 pod_ready.go:86] duration metric: took 398.296899ms for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.342891  489608 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.744790  489608 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:08.744869  489608 pod_ready.go:86] duration metric: took 401.949244ms for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.744900  489608 pod_ready.go:40] duration metric: took 32.410667664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:40:08.803504  489608 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:40:08.806726  489608 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-245904" cluster and "default" namespace by default
	W1101 10:40:07.661516  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:09.662565  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:12.162223  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:14.162290  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:16.662072  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:19.161910  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.643380027Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a69a5a79-06b8-4be5-8959-811c57d66c55 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.644633383Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3077858-ce90-4bee-b920-4bbc4a566c31 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.645633499Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper" id=c3aeeb2e-14b2-4bce-b722-6327fcc5812c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.645796144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.655831431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.656553692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.672590824Z" level=info msg="Created container 354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper" id=c3aeeb2e-14b2-4bce-b722-6327fcc5812c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.676903253Z" level=info msg="Starting container: 354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5" id=c072d766-f358-4a2b-bacb-c53afd6db573 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.679457168Z" level=info msg="Started container" PID=1681 containerID=354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper id=c072d766-f358-4a2b-bacb-c53afd6db573 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8b73ba8303ffb1c4480ae72c741a9f8e1d960bc240535aafacf3b5b710c8609
	Nov 01 10:40:13 default-k8s-diff-port-245904 conmon[1679]: conmon 354e6c29f4ba8d02bcc9 <ninfo>: container 1681 exited with status 1
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.057396175Z" level=info msg="Removing container: c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1" id=d239d791-11a1-4ed5-a7af-a7f53044723d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.06518009Z" level=info msg="Error loading conmon cgroup of container c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1: cgroup deleted" id=d239d791-11a1-4ed5-a7af-a7f53044723d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.069461971Z" level=info msg="Removed container c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper" id=d239d791-11a1-4ed5-a7af-a7f53044723d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.777253165Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.781814845Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.781858136Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.781882465Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.785551871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.785621501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.785648792Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.789292704Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.789333706Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.789432793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.794024414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.794062856Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	354e6c29f4ba8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   2                   d8b73ba8303ff       dashboard-metrics-scraper-6ffb444bf9-gl8hh             kubernetes-dashboard
	d46a0edeb9401       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago       Running             storage-provisioner         2                   8f4fc819c76e5       storage-provisioner                                    kube-system
	4e1c18e366f01       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   244ee22402079       kubernetes-dashboard-855c9754f9-l727q                  kubernetes-dashboard
	98c11ffd4d3f9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago       Running             coredns                     1                   cdda822eb1b64       coredns-66bc5c9577-h2552                               kube-system
	f8a20eb3878fb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   339a0d738e46c       kube-proxy-8d8hl                                       kube-system
	b7b00512262ae       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   ba9718875aa11       kindnet-5xtxk                                          kube-system
	b839606527a0b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago       Exited              storage-provisioner         1                   8f4fc819c76e5       storage-provisioner                                    kube-system
	3cb40663dbe09       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago       Running             busybox                     1                   55d43120cedb8       busybox                                                default
	d782666800538       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   911683cffce8e       kube-apiserver-default-k8s-diff-port-245904            kube-system
	f9910db4dfdda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   dacd1b00f3201       kube-controller-manager-default-k8s-diff-port-245904   kube-system
	9cfafd062ccb4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   58b5306c33908       kube-scheduler-default-k8s-diff-port-245904            kube-system
	30e834d8a77dc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c67a1944f6b69       etcd-default-k8s-diff-port-245904                      kube-system
	
	
	==> coredns [98c11ffd4d3f91309c84aba212eabefcb80ccd370b1c392fdbd639ef33c9cf14] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60276 - 46647 "HINFO IN 840714791110925119.7241798148311781223. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021753219s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-245904
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-245904
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=default-k8s-diff-port-245904
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_38_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-245904
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:38:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-245904
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                50f868bb-abe9-4a86-b184-01355addeabf
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-h2552                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-default-k8s-diff-port-245904                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-5xtxk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-245904             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-245904    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-8d8hl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-245904             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gl8hh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l727q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 46s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s                  node-controller  Node default-k8s-diff-port-245904 event: Registered Node default-k8s-diff-port-245904 in Controller
	  Normal   NodeReady                98s                    kubelet          Node default-k8s-diff-port-245904 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           45s                    node-controller  Node default-k8s-diff-port-245904 event: Registered Node default-k8s-diff-port-245904 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2] <==
	{"level":"warn","ts":"2025-11-01T10:39:28.461805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.490586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.547442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.585903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.612696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.685510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.765520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.883767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.946630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.069761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.078680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.174065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.205036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.269074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.338010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.375782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.417996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.482918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.525435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.571243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.779851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:32.924794Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.969175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:public-info-viewer\" limit:1 ","response":"range_response_count:1 size:613"}
	{"level":"info","ts":"2025-11-01T10:39:32.924858Z","caller":"traceutil/trace.go:172","msg":"trace[1552577361] range","detail":"{range_begin:/registry/clusterroles/system:public-info-viewer; range_end:; response_count:1; response_revision:497; }","duration":"119.048257ms","start":"2025-11-01T10:39:32.805796Z","end":"2025-11-01T10:39:32.924844Z","steps":["trace[1552577361] 'agreement among raft nodes before linearized reading'  (duration: 21.851914ms)","trace[1552577361] 'range keys from in-memory index tree'  (duration: 97.039303ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:39:32.925420Z","caller":"traceutil/trace.go:172","msg":"trace[1477835500] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"119.822761ms","start":"2025-11-01T10:39:32.805583Z","end":"2025-11-01T10:39:32.925406Z","steps":["trace[1477835500] 'process raft request'  (duration: 22.142232ms)","trace[1477835500] 'compare'  (duration: 97.271841ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:39:33.955085Z","caller":"traceutil/trace.go:172","msg":"trace[1971129672] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"196.901711ms","start":"2025-11-01T10:39:33.758165Z","end":"2025-11-01T10:39:33.955067Z","steps":["trace[1971129672] 'process raft request'  (duration: 165.622001ms)","trace[1971129672] 'compare'  (duration: 31.129061ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:40:23 up  2:22,  0 user,  load average: 3.70, 4.08, 3.38
	Linux default-k8s-diff-port-245904 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b7b00512262aea3dcc035878abe865da07ea524a984e03217db4298decd3413f] <==
	I1101 10:39:34.443288       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:39:34.443518       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:39:34.443650       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:39:34.443661       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:39:34.443674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:39:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:39:34.776390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:39:34.776464       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:39:34.776499       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:39:34.821978       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:40:04.776687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:40:04.823283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:40:04.823285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:40:04.823483       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:40:06.223451       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:40:06.223484       1 metrics.go:72] Registering metrics
	I1101 10:40:06.223536       1 controller.go:711] "Syncing nftables rules"
	I1101 10:40:14.776843       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:40:14.776939       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821] <==
	I1101 10:39:32.242033       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:39:32.259480       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:39:32.260013       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:39:32.260036       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:39:32.260045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:39:32.260051       1 cache.go:39] Caches are synced for autoregister controller
	E1101 10:39:32.283685       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:39:32.293877       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:39:32.294003       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:39:32.303350       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:39:32.316720       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:39:32.317000       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:39:32.338999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:39:32.375788       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:39:32.472374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:39:32.724734       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:39:34.072645       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:39:34.663839       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:39:35.032082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:39:35.210730       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:39:35.657450       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.33.67"}
	I1101 10:39:35.720028       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.209.79"}
	I1101 10:39:38.095608       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:39:38.342380       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:39:38.488736       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39] <==
	I1101 10:39:38.022378       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:39:38.022499       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-245904"
	I1101 10:39:38.022554       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:39:38.012790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:39:38.022611       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:39:38.022618       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:39:38.012747       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:39:37.981893       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:39:38.012833       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:39:38.032267       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:39:38.032394       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:39:38.032541       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:39:38.042628       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:39:37.981879       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:39:38.045550       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:39:38.012473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:39:38.012762       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:39:38.012801       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:39:38.012809       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:39:38.012824       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:39:38.012841       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:39:38.077629       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:39:38.077764       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:39:38.080114       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:39:38.080334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [f8a20eb3878fb74917aa7efd04e8592e15bb898b2148768ed94f97fa6c1e0aff] <==
	I1101 10:39:36.104815       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:39:36.314812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:39:36.421582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:39:36.421657       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:39:36.421800       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:39:36.928716       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:39:36.928833       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:39:36.950976       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:39:36.951400       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:39:36.951626       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:39:36.952934       1 config.go:200] "Starting service config controller"
	I1101 10:39:36.953003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:39:36.953057       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:39:36.953100       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:39:36.953148       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:39:36.953186       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:39:36.953899       1 config.go:309] "Starting node config controller"
	I1101 10:39:36.953975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:39:36.954008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:39:37.054368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:39:37.054369       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:39:37.054405       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573] <==
	I1101 10:39:27.175255       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:39:36.700629       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:39:36.700868       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:39:36.721622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:39:36.722986       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:39:36.723056       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:39:36.723109       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:39:36.723935       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:39:36.724008       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:39:36.724093       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:39:36.724126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:39:36.823841       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:39:36.825333       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:39:36.825428       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:38.502849     783 status_manager.go:1018] "Failed to get status for pod" err="pods \"dashboard-metrics-scraper-6ffb444bf9-gl8hh\" is forbidden: User \"system:node:default-k8s-diff-port-245904\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'default-k8s-diff-port-245904' and this object" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639237     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b29821b8-c8ed-4661-be4e-54b3ffcd852b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-l727q\" (UID: \"b29821b8-c8ed-4661-be4e-54b3ffcd852b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l727q"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639309     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz5c\" (UniqueName: \"kubernetes.io/projected/ed252192-818e-45b5-82a4-86dd6cb408b9-kube-api-access-jjz5c\") pod \"dashboard-metrics-scraper-6ffb444bf9-gl8hh\" (UID: \"ed252192-818e-45b5-82a4-86dd6cb408b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639338     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed252192-818e-45b5-82a4-86dd6cb408b9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gl8hh\" (UID: \"ed252192-818e-45b5-82a4-86dd6cb408b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639356     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvvk7\" (UniqueName: \"kubernetes.io/projected/b29821b8-c8ed-4661-be4e-54b3ffcd852b-kube-api-access-vvvk7\") pod \"kubernetes-dashboard-855c9754f9-l727q\" (UID: \"b29821b8-c8ed-4661-be4e-54b3ffcd852b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l727q"
	Nov 01 10:39:39 default-k8s-diff-port-245904 kubelet[783]: W1101 10:39:39.778556     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/crio-244ee2240207915c67d54df3d42c89ac8b45d65cbc4307e8cf776711c0d55449 WatchSource:0}: Error finding container 244ee2240207915c67d54df3d42c89ac8b45d65cbc4307e8cf776711c0d55449: Status 404 returned error can't find the container with id 244ee2240207915c67d54df3d42c89ac8b45d65cbc4307e8cf776711c0d55449
	Nov 01 10:39:52 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:52.986885     783 scope.go:117] "RemoveContainer" containerID="f5bccd49a2305d3009a385e5b58d31dcbd715f902727659225f510543796928e"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:53.018440     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l727q" podStartSLOduration=8.540587758000001 podStartE2EDuration="15.01842248s" podCreationTimestamp="2025-11-01 10:39:38 +0000 UTC" firstStartedPulling="2025-11-01 10:39:39.790315131 +0000 UTC m=+17.394798322" lastFinishedPulling="2025-11-01 10:39:46.268149853 +0000 UTC m=+23.872633044" observedRunningTime="2025-11-01 10:39:46.989518763 +0000 UTC m=+24.594001971" watchObservedRunningTime="2025-11-01 10:39:53.01842248 +0000 UTC m=+30.622905679"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:53.991477     783 scope.go:117] "RemoveContainer" containerID="f5bccd49a2305d3009a385e5b58d31dcbd715f902727659225f510543796928e"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:53.992387     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:53.992651     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:39:54 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:54.995764     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:39:54 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:54.995957     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:39:59 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:59.683388     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:39:59 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:59.683606     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:40:06 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:06.030242     783 scope.go:117] "RemoveContainer" containerID="b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac"
	Nov 01 10:40:13 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:13.642828     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:40:14 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:14.054552     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:40:14 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:14.055360     783 scope.go:117] "RemoveContainer" containerID="354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	Nov 01 10:40:14 default-k8s-diff-port-245904 kubelet[783]: E1101 10:40:14.055689     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:40:19 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:19.682535     783 scope.go:117] "RemoveContainer" containerID="354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	Nov 01 10:40:19 default-k8s-diff-port-245904 kubelet[783]: E1101 10:40:19.683220     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:40:21 default-k8s-diff-port-245904 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:40:21 default-k8s-diff-port-245904 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:40:21 default-k8s-diff-port-245904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4e1c18e366f011597bd4500e494e129d7e239722c028290b019581f02af5459f] <==
	2025/11/01 10:39:46 Starting overwatch
	2025/11/01 10:39:46 Using namespace: kubernetes-dashboard
	2025/11/01 10:39:46 Using in-cluster config to connect to apiserver
	2025/11/01 10:39:46 Using secret token for csrf signing
	2025/11/01 10:39:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:39:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:39:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:39:46 Generating JWE encryption key
	2025/11/01 10:39:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:39:47 Initializing JWE encryption key from synchronized object
	2025/11/01 10:39:47 Creating in-cluster Sidecar client
	2025/11/01 10:39:47 Serving insecurely on HTTP port: 9090
	2025/11/01 10:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:40:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac] <==
	I1101 10:39:35.298351       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:40:05.520740       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d46a0edeb94014e2b6de899870e120c1e9663026c65d0bae3809f3f4a5097fd4] <==
	I1101 10:40:06.077935       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:40:06.092658       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:40:06.092791       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:40:06.095096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:09.549850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:13.810270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:17.408858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:20.468625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:23.490761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:23.502844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:40:23.504774       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e194f99-8f93-4855-b159-998a98b1e129", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-245904_e3b2b290-6204-4191-bc9c-12b4a7fe5bf8 became leader
	I1101 10:40:23.507535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:40:23.507758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-245904_e3b2b290-6204-4191-bc9c-12b4a7fe5bf8!
	W1101 10:40:23.522536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:23.527280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:40:23.608682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-245904_e3b2b290-6204-4191-bc9c-12b4a7fe5bf8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904: exit status 2 (392.444949ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-245904
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-245904:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e",
	        "Created": "2025-11-01T10:37:31.035014069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:39:14.832090165Z",
	            "FinishedAt": "2025-11-01T10:39:14.002746879Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/hosts",
	        "LogPath": "/var/lib/docker/containers/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e-json.log",
	        "Name": "/default-k8s-diff-port-245904",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-245904:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-245904",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e",
	                "LowerDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0-init/diff:/var/lib/docker/overlay2/0562d39e149b0799803614f22e14b751c94aa15c79abfad32d471de6bcd99e53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56db1c30c3d2d89abb3ac6faef25516572230fcd0f879581fd368780eca68aa0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-245904",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-245904/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-245904",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-245904",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-245904",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43509f8221a7cd70d36ba1dbdcc428a50956b78274ed1b4d20546c06da2fb41e",
	            "SandboxKey": "/var/run/docker/netns/43509f8221a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-245904": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:bb:2c:7b:fc:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ca453ec076d50791763a6c741bc9e74267d64bf587acdd7076e49fdbf14831b1",
	                    "EndpointID": "eb9790fe5ce71d770e0adad2bf1fa0cace1caeebd9dab0efaf2474778ad41386",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-245904",
	                        "a7be6b4a2a88"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
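As an aside (not part of the recorded test run): the port mappings captured in the inspect dump above can be read back directly with docker's Go-template formatter. The one-liner below is an illustrative sketch for pulling the forwarded SSH port of this container; the container name is taken from the report above.

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-245904
    # prints the host port bound to 22/tcp, i.e. 33460 in the state captured above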
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904: exit status 2 (372.41915ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
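For reference (an illustrative sketch, not something the harness runs): the same --format mechanism used above accepts an arbitrary Go template over the status struct, so every component can be read in a single call. The field names (Host, Kubelet, APIServer, Kubeconfig) match the templates already used in this report; the combined template itself is only an assumption for illustration.

    out/minikube-linux-arm64 status -p default-k8s-diff-port-245904 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
    # prints all four components on one line; for a paused profile one would expect the host to
    # report Running while the Kubernetes components do not, which is why the harness treats the
    # non-zero exit above as potentially benign ("may be ok")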
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-245904 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-245904 logs -n 25: (1.309420503s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-170467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p no-preload-170467                                                                                                                                                                                                                          │ no-preload-170467            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p disable-driver-mounts-416512                                                                                                                                                                                                               │ disable-driver-mounts-416512 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ embed-certs-618070 image list --format=json                                                                                                                                                                                                   │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-618070 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-618070                                                                                                                                                                                                                         │ embed-certs-618070           │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:37 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-761749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p newest-cni-761749 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-761749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ start   │ -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ image   │ newest-cni-761749 image list --format=json                                                                                                                                                                                                    │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │ 01 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-761749 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-245904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:38 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-245904 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ delete  │ -p newest-cni-761749                                                                                                                                                                                                                          │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ delete  │ -p newest-cni-761749                                                                                                                                                                                                                          │ newest-cni-761749            │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p auto-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-220636                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-245904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:39 UTC │
	│ start   │ -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:39 UTC │ 01 Nov 25 10:40 UTC │
	│ image   │ default-k8s-diff-port-245904 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ pause   │ -p default-k8s-diff-port-245904 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-245904 │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:39:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:39:14.554230  489608 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:39:14.554611  489608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:39:14.554647  489608 out.go:374] Setting ErrFile to fd 2...
	I1101 10:39:14.554667  489608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:39:14.554965  489608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:39:14.555387  489608 out.go:368] Setting JSON to false
	I1101 10:39:14.556303  489608 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8504,"bootTime":1761985051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:39:14.556402  489608 start.go:143] virtualization:  
	I1101 10:39:14.559114  489608 out.go:179] * [default-k8s-diff-port-245904] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:39:14.563072  489608 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:39:14.563188  489608 notify.go:221] Checking for updates...
	I1101 10:39:14.569090  489608 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:39:14.572153  489608 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:14.575046  489608 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:39:14.577866  489608 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:39:14.580680  489608 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:39:14.584036  489608 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:14.584578  489608 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:39:14.614923  489608 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:39:14.615056  489608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:39:14.670936  489608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:39:14.661754965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:39:14.671052  489608 docker.go:319] overlay module found
	I1101 10:39:14.674304  489608 out.go:179] * Using the docker driver based on existing profile
	I1101 10:39:14.677137  489608 start.go:309] selected driver: docker
	I1101 10:39:14.677156  489608 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:14.677242  489608 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:39:14.678136  489608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:39:14.743793  489608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:39:14.723968749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:39:14.744146  489608 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:39:14.744184  489608 cni.go:84] Creating CNI manager for ""
	I1101 10:39:14.744247  489608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:14.744291  489608 start.go:353] cluster config:
	{Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:14.747560  489608 out.go:179] * Starting "default-k8s-diff-port-245904" primary control-plane node in "default-k8s-diff-port-245904" cluster
	I1101 10:39:14.750445  489608 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:39:14.753508  489608 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:39:14.756409  489608 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:39:14.756477  489608 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:39:14.756488  489608 cache.go:59] Caching tarball of preloaded images
	I1101 10:39:14.756514  489608 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:39:14.756592  489608 preload.go:233] Found /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:39:14.756603  489608 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:39:14.756724  489608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:39:14.776988  489608 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:39:14.777013  489608 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:39:14.777028  489608 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:39:14.777055  489608 start.go:360] acquireMachinesLock for default-k8s-diff-port-245904: {Name:mkd19cff2a35f3bd59a365809e4cb064a7918a80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:39:14.777121  489608 start.go:364] duration metric: took 38.36µs to acquireMachinesLock for "default-k8s-diff-port-245904"
	I1101 10:39:14.777148  489608 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:39:14.777157  489608 fix.go:54] fixHost starting: 
	I1101 10:39:14.777424  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:14.795225  489608 fix.go:112] recreateIfNeeded on default-k8s-diff-port-245904: state=Stopped err=<nil>
	W1101 10:39:14.795257  489608 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:39:10.826171  488406 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-220636:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.578274612s)
	I1101 10:39:10.826202  488406 kic.go:203] duration metric: took 4.578407841s to extract preloaded images to volume ...
	W1101 10:39:10.826343  488406 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:39:10.826460  488406 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:39:10.883366  488406 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-220636 --name auto-220636 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-220636 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-220636 --network auto-220636 --ip 192.168.85.2 --volume auto-220636:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:39:11.193752  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Running}}
	I1101 10:39:11.217780  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:11.242360  488406 cli_runner.go:164] Run: docker exec auto-220636 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:39:11.293531  488406 oci.go:144] the created container "auto-220636" has a running status.
	I1101 10:39:11.293560  488406 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa...
	I1101 10:39:11.898308  488406 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:39:11.920583  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:11.937491  488406 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:39:11.937510  488406 kic_runner.go:114] Args: [docker exec --privileged auto-220636 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:39:11.977405  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:11.995291  488406 machine.go:94] provisionDockerMachine start ...
	I1101 10:39:11.995399  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:12.013670  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:12.014043  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:12.014064  488406 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:39:12.014783  488406 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:39:15.201830  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-220636
	
	I1101 10:39:15.201861  488406 ubuntu.go:182] provisioning hostname "auto-220636"
	I1101 10:39:15.201927  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:15.227360  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:15.227684  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:15.227703  488406 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-220636 && echo "auto-220636" | sudo tee /etc/hostname
	I1101 10:39:15.422162  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-220636
	
	I1101 10:39:15.422263  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:15.451154  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:15.451485  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:15.451509  488406 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-220636' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-220636/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-220636' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:39:15.629038  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:39:15.629069  488406 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:39:15.629089  488406 ubuntu.go:190] setting up certificates
	I1101 10:39:15.629099  488406 provision.go:84] configureAuth start
	I1101 10:39:15.629160  488406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-220636
	I1101 10:39:15.660825  488406 provision.go:143] copyHostCerts
	I1101 10:39:15.660886  488406 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:39:15.660898  488406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:39:15.660968  488406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:39:15.661064  488406 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:39:15.661080  488406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:39:15.661108  488406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:39:15.661179  488406 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:39:15.661189  488406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:39:15.661217  488406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:39:15.661281  488406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.auto-220636 san=[127.0.0.1 192.168.85.2 auto-220636 localhost minikube]
	I1101 10:39:15.926962  488406 provision.go:177] copyRemoteCerts
	I1101 10:39:15.927095  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:39:15.927160  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:15.944207  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
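Note: the ssh client above dials 127.0.0.1:33455 because Docker publishes the container's 22/tcp on an ephemeral host port, which minikube resolves with the docker container inspect template logged a few lines earlier. A sketch of the manual equivalent (container name, key path and the "docker" user are taken from this run, not from any documented interface):

    # print the host port mapped to 22/tcp, then ssh to it as the "docker" user
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' auto-220636)
    ssh -p "$PORT" -i /home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa docker@127.0.0.1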
	I1101 10:39:16.050866  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:39:16.070966  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:39:16.090157  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:39:16.109457  488406 provision.go:87] duration metric: took 480.33261ms to configureAuth
	I1101 10:39:16.109528  488406 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:39:16.109845  488406 config.go:182] Loaded profile config "auto-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:16.109970  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.127653  488406 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:16.127969  488406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1101 10:39:16.127989  488406 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:39:16.387681  488406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:39:16.387715  488406 machine.go:97] duration metric: took 4.392395135s to provisionDockerMachine
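The step just completed wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and restarted crio. How that flag reaches the daemon is not shown in the log; presumably the kicbase image's crio systemd unit sources the file as an environment file and expands the variable on its ExecStart line, roughly like the sketch below (an assumption about the unit, not a quote from it):

    # hypothetical excerpt of the crio unit shipped in the kicbase image
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS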
	I1101 10:39:16.387726  488406 client.go:176] duration metric: took 10.832974346s to LocalClient.Create
	I1101 10:39:16.387739  488406 start.go:167] duration metric: took 10.833041728s to libmachine.API.Create "auto-220636"
	I1101 10:39:16.387746  488406 start.go:293] postStartSetup for "auto-220636" (driver="docker")
	I1101 10:39:16.387761  488406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:39:16.387823  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:39:16.387865  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.406181  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.514067  488406 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:39:16.517569  488406 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:39:16.517599  488406 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:39:16.517610  488406 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:39:16.517682  488406 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:39:16.517816  488406 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:39:16.517931  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:39:16.525474  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:16.544713  488406 start.go:296] duration metric: took 156.951232ms for postStartSetup
	I1101 10:39:16.545078  488406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-220636
	I1101 10:39:16.567817  488406 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/config.json ...
	I1101 10:39:16.568098  488406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:39:16.568141  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.585290  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.686761  488406 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:39:16.691236  488406 start.go:128] duration metric: took 11.140245372s to createHost
	I1101 10:39:16.691259  488406 start.go:83] releasing machines lock for "auto-220636", held for 11.140379463s
	I1101 10:39:16.691337  488406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-220636
	I1101 10:39:16.708386  488406 ssh_runner.go:195] Run: cat /version.json
	I1101 10:39:16.708449  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.708541  488406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:39:16.708599  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:16.728958  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.738886  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:16.833505  488406 ssh_runner.go:195] Run: systemctl --version
	I1101 10:39:16.957761  488406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:39:16.994506  488406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:39:16.999001  488406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:39:16.999149  488406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:39:17.028653  488406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:39:17.028675  488406 start.go:496] detecting cgroup driver to use...
	I1101 10:39:17.028707  488406 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:39:17.028756  488406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:39:17.047641  488406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:39:17.060827  488406 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:39:17.060931  488406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:39:17.078861  488406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:39:17.098245  488406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:39:17.219128  488406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:39:17.338425  488406 docker.go:234] disabling docker service ...
	I1101 10:39:17.338497  488406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:39:17.359788  488406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:39:17.373387  488406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:39:17.497970  488406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:39:17.620484  488406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:39:17.633447  488406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:39:17.647602  488406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:39:17.647669  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.656611  488406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:39:17.656711  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.666009  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.674725  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.683730  488406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:39:17.691979  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.701018  488406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.714896  488406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:17.724133  488406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:39:17.732311  488406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:39:17.740030  488406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:17.857738  488406 ssh_runner.go:195] Run: sudo systemctl restart crio
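The sed/grep edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all target /etc/crio/crio.conf.d/02-crio.conf, so after this restart the drop-in should read roughly as follows. Only the keys and values are taken from the log; the TOML section headers are assumptions about where cri-o keeps them:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]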
	I1101 10:39:17.982190  488406 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:39:17.982309  488406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:39:17.986567  488406 start.go:564] Will wait 60s for crictl version
	I1101 10:39:17.986639  488406 ssh_runner.go:195] Run: which crictl
	I1101 10:39:17.990288  488406 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:39:18.021331  488406 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:39:18.021496  488406 ssh_runner.go:195] Run: crio --version
	I1101 10:39:18.049433  488406 ssh_runner.go:195] Run: crio --version
	I1101 10:39:18.087950  488406 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:39:14.798658  489608 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-245904" ...
	I1101 10:39:14.798750  489608 cli_runner.go:164] Run: docker start default-k8s-diff-port-245904
	I1101 10:39:15.110369  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:15.135998  489608 kic.go:430] container "default-k8s-diff-port-245904" state is running.
	I1101 10:39:15.136406  489608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:39:15.166813  489608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/config.json ...
	I1101 10:39:15.167069  489608 machine.go:94] provisionDockerMachine start ...
	I1101 10:39:15.167132  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:15.186784  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:15.187140  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:15.187157  489608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:39:15.187873  489608 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:39:18.353625  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:39:18.353666  489608 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-245904"
	I1101 10:39:18.353748  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:18.373041  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:18.373341  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:18.373359  489608 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-245904 && echo "default-k8s-diff-port-245904" | sudo tee /etc/hostname
	I1101 10:39:18.578271  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-245904
	
	I1101 10:39:18.578353  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:18.599691  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:18.599990  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:18.600009  489608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-245904' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-245904/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-245904' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:39:18.767431  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:39:18.767461  489608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-285274/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-285274/.minikube}
	I1101 10:39:18.767498  489608 ubuntu.go:190] setting up certificates
	I1101 10:39:18.767524  489608 provision.go:84] configureAuth start
	I1101 10:39:18.767608  489608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:39:18.793853  489608 provision.go:143] copyHostCerts
	I1101 10:39:18.793943  489608 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem, removing ...
	I1101 10:39:18.793962  489608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem
	I1101 10:39:18.794050  489608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/ca.pem (1078 bytes)
	I1101 10:39:18.794168  489608 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem, removing ...
	I1101 10:39:18.794180  489608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem
	I1101 10:39:18.794212  489608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/cert.pem (1123 bytes)
	I1101 10:39:18.794288  489608 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem, removing ...
	I1101 10:39:18.794298  489608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem
	I1101 10:39:18.794330  489608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-285274/.minikube/key.pem (1679 bytes)
	I1101 10:39:18.794400  489608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-245904 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-245904 localhost minikube]
	I1101 10:39:19.325859  489608 provision.go:177] copyRemoteCerts
	I1101 10:39:19.325931  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:39:19.325994  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:19.344682  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:19.470641  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:39:19.490897  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:39:19.511301  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:39:19.531077  489608 provision.go:87] duration metric: took 763.5269ms to configureAuth
	I1101 10:39:19.531101  489608 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:39:19.531299  489608 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:19.531405  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:18.090876  488406 cli_runner.go:164] Run: docker network inspect auto-220636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:39:18.107815  488406 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:39:18.111883  488406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
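The bash one-liner above strips any stale host.minikube.internal entry from the guest's /etc/hosts and appends a fresh one, so the file ends up with a single extra line mapping the network gateway to the host (sketch of the resulting entry):

    192.168.85.1	host.minikube.internal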
	I1101 10:39:18.122236  488406 kubeadm.go:884] updating cluster {Name:auto-220636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-220636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:39:18.122356  488406 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:39:18.122418  488406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:18.159986  488406 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:18.160010  488406 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:39:18.160068  488406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:18.185725  488406 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:18.185746  488406 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:39:18.185754  488406 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:39:18.185851  488406 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-220636 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-220636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:39:18.185930  488406 ssh_runner.go:195] Run: crio config
	I1101 10:39:18.263075  488406 cni.go:84] Creating CNI manager for ""
	I1101 10:39:18.263503  488406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:18.263524  488406 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:39:18.263579  488406 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-220636 NodeName:auto-220636 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:39:18.263729  488406 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-220636"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:39:18.263817  488406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:39:18.274660  488406 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:39:18.274774  488406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:39:18.284113  488406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 10:39:18.299022  488406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:39:18.314666  488406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
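The 2208-byte payload written to /var/tmp/minikube/kubeadm.yaml.new is the three-document kubeadm config dumped above; later in this run it is copied to kubeadm.yaml and passed to kubeadm init via --config. If you needed to sanity-check such a file by hand, recent kubeadm releases ship a validate subcommand (whether this exact binary supports it is an assumption):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml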
	I1101 10:39:18.328719  488406 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:39:18.332446  488406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:39:18.342649  488406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:18.498815  488406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:18.515322  488406 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636 for IP: 192.168.85.2
	I1101 10:39:18.515344  488406 certs.go:195] generating shared ca certs ...
	I1101 10:39:18.515360  488406 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:18.515495  488406 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:39:18.515542  488406 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:39:18.515552  488406 certs.go:257] generating profile certs ...
	I1101 10:39:18.515607  488406 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.key
	I1101 10:39:18.515625  488406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt with IP's: []
	I1101 10:39:19.161666  488406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt ...
	I1101 10:39:19.161759  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: {Name:mk6431b3df0d248a167255a91e18586ae16b9974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.161992  488406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.key ...
	I1101 10:39:19.162033  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.key: {Name:mk593c24b085637d1e3004773d11fa7baec8761e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.162178  488406 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1
	I1101 10:39:19.162221  488406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:39:19.426859  488406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1 ...
	I1101 10:39:19.426895  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1: {Name:mk01906c5c93f94bf5ff3c4d19c73a9d57fb53d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.427137  488406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1 ...
	I1101 10:39:19.427155  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1: {Name:mk00bcf5f7d2853eb6eeaf5cecf8f0b4733f15b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.427264  488406 certs.go:382] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt.a5c9aff1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt
	I1101 10:39:19.427355  488406 certs.go:386] copying /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key.a5c9aff1 -> /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key
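At this point the profile's apiserver serving certificate exists at .../profiles/auto-220636/apiserver.crt with the SANs requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A quick way to confirm them from the generated file, assuming openssl is available on the build host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt \
      | grep -A1 'Subject Alternative Name'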
	I1101 10:39:19.427417  488406 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key
	I1101 10:39:19.427438  488406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt with IP's: []
	I1101 10:39:19.715157  488406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt ...
	I1101 10:39:19.715189  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt: {Name:mkcc5b12f0ed8ca4d8068df2908c316e1853316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.715388  488406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key ...
	I1101 10:39:19.715401  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key: {Name:mk50898e43091d82395d7464c9b66369c615007c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:19.715600  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:39:19.715645  488406 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:39:19.715655  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:39:19.715679  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:39:19.715712  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:39:19.715734  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:39:19.715780  488406 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:19.716335  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:39:19.736287  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:39:19.756385  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:39:19.777609  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:39:19.797250  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 10:39:19.815788  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:39:19.833572  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:39:19.858377  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:39:19.879647  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:39:19.900190  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:39:19.922906  488406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:39:19.958198  488406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:39:19.978221  488406 ssh_runner.go:195] Run: openssl version
	I1101 10:39:19.984508  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:39:19.993511  488406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:19.997469  488406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:19.997536  488406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:20.040193  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:39:20.051060  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:39:20.060240  488406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:39:20.064365  488406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:39:20.064429  488406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:39:20.108700  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:39:20.117675  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:39:20.126716  488406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:39:20.132686  488406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:39:20.132754  488406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:39:20.192952  488406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
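The symlink names in this block are OpenSSL subject hashes: b5213941.0 for minikubeCA.pem, 51391683.0 for 287135.pem and 3ec20f2e.0 for 2871352.pem, which is how the system OpenSSL locates trusted certificates in /etc/ssl/certs. The hash is the output of the same x509 -hash invocation the log runs just before each link is created, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above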
	I1101 10:39:20.205890  488406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:39:20.209851  488406 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:39:20.209908  488406 kubeadm.go:401] StartCluster: {Name:auto-220636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-220636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:20.209990  488406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:39:20.210054  488406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:39:20.245999  488406 cri.go:89] found id: ""
	I1101 10:39:20.246076  488406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:39:20.254336  488406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:39:20.262185  488406 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:39:20.262250  488406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:39:20.272341  488406 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:39:20.272355  488406 kubeadm.go:158] found existing configuration files:
	
	I1101 10:39:20.272393  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:39:20.283529  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:39:20.283597  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:39:20.292775  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:39:20.303228  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:39:20.303292  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:39:20.312334  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:39:20.329419  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:39:20.329486  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:39:20.339754  488406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:39:20.349136  488406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:39:20.349207  488406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:39:20.357952  488406 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:39:20.409348  488406 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:39:20.409717  488406 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:39:20.444121  488406 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:39:20.444249  488406 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:39:20.444308  488406 kubeadm.go:319] OS: Linux
	I1101 10:39:20.444382  488406 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:39:20.444464  488406 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:39:20.444545  488406 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:39:20.444629  488406 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:39:20.444710  488406 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:39:20.444788  488406 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:39:20.444855  488406 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:39:20.444916  488406 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:39:20.444971  488406 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:39:20.533056  488406 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:39:20.533172  488406 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:39:20.533268  488406 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:39:20.542396  488406 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:39:19.575799  489608 main.go:143] libmachine: Using SSH client type: native
	I1101 10:39:19.576200  489608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1101 10:39:19.576221  489608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:39:19.945808  489608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:39:19.945835  489608 machine.go:97] duration metric: took 4.77875463s to provisionDockerMachine
	I1101 10:39:19.945847  489608 start.go:293] postStartSetup for "default-k8s-diff-port-245904" (driver="docker")
	I1101 10:39:19.945858  489608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:39:19.945930  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:39:19.945976  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:19.973071  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.087653  489608 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:39:20.092462  489608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:39:20.092490  489608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:39:20.092501  489608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/addons for local assets ...
	I1101 10:39:20.092560  489608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-285274/.minikube/files for local assets ...
	I1101 10:39:20.092648  489608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem -> 2871352.pem in /etc/ssl/certs
	I1101 10:39:20.092756  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:39:20.102551  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:20.127559  489608 start.go:296] duration metric: took 181.699212ms for postStartSetup
	I1101 10:39:20.127674  489608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:39:20.127744  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:20.149790  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.272005  489608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:39:20.279190  489608 fix.go:56] duration metric: took 5.502025419s for fixHost
	I1101 10:39:20.279212  489608 start.go:83] releasing machines lock for "default-k8s-diff-port-245904", held for 5.50207959s
	I1101 10:39:20.279277  489608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-245904
	I1101 10:39:20.298317  489608 ssh_runner.go:195] Run: cat /version.json
	I1101 10:39:20.298363  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:20.298585  489608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:39:20.298661  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:20.331251  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.337266  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:20.466384  489608 ssh_runner.go:195] Run: systemctl --version
	I1101 10:39:20.568402  489608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:39:20.619712  489608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:39:20.627022  489608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:39:20.627229  489608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:39:20.638856  489608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:39:20.638948  489608 start.go:496] detecting cgroup driver to use...
	I1101 10:39:20.638994  489608 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:39:20.639089  489608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:39:20.660036  489608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:39:20.678635  489608 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:39:20.678770  489608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:39:20.700279  489608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:39:20.719091  489608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:39:20.868611  489608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:39:21.045986  489608 docker.go:234] disabling docker service ...
	I1101 10:39:21.046124  489608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:39:21.064377  489608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:39:21.079003  489608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:39:21.231745  489608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:39:21.378629  489608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:39:21.393044  489608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:39:21.408056  489608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:39:21.408135  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.417200  489608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:39:21.417268  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.426661  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.435911  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.446149  489608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:39:21.454784  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.463990  489608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.472878  489608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:39:21.481782  489608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:39:21.489499  489608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:39:21.497228  489608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:21.636576  489608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:39:21.801311  489608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:39:21.801383  489608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:39:21.806064  489608 start.go:564] Will wait 60s for crictl version
	I1101 10:39:21.806214  489608 ssh_runner.go:195] Run: which crictl
	I1101 10:39:21.810712  489608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:39:21.837052  489608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:39:21.837200  489608 ssh_runner.go:195] Run: crio --version
	I1101 10:39:21.870707  489608 ssh_runner.go:195] Run: crio --version
	I1101 10:39:21.912339  489608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:39:21.915332  489608 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-245904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:39:21.938411  489608 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:39:21.942613  489608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:39:21.964445  489608 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:39:21.964591  489608 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:39:21.964644  489608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:22.031995  489608 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:22.032016  489608 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:39:22.032073  489608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:39:22.072189  489608 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:39:22.072211  489608 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:39:22.072219  489608 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 10:39:22.072315  489608 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-245904 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:39:22.072399  489608 ssh_runner.go:195] Run: crio config
	I1101 10:39:22.153290  489608 cni.go:84] Creating CNI manager for ""
	I1101 10:39:22.153353  489608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:22.153395  489608 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:39:22.153440  489608 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-245904 NodeName:default-k8s-diff-port-245904 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:39:22.153640  489608 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-245904"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:39:22.153766  489608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:39:22.162833  489608 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:39:22.162984  489608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:39:22.171645  489608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 10:39:22.189478  489608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:39:22.203837  489608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 10:39:22.218175  489608 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:39:22.222170  489608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:39:22.232152  489608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:22.373569  489608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:22.391488  489608 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904 for IP: 192.168.76.2
	I1101 10:39:22.391507  489608 certs.go:195] generating shared ca certs ...
	I1101 10:39:22.391523  489608 certs.go:227] acquiring lock for ca certs: {Name:mkf4087ba800a4d47f1a7b0baa48112f9a770038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:22.391658  489608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key
	I1101 10:39:22.391703  489608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key
	I1101 10:39:22.391715  489608 certs.go:257] generating profile certs ...
	I1101 10:39:22.391798  489608 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.key
	I1101 10:39:22.391867  489608 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key.52ff7e67
	I1101 10:39:22.391902  489608 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key
	I1101 10:39:22.392005  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem (1338 bytes)
	W1101 10:39:22.392031  489608 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135_empty.pem, impossibly tiny 0 bytes
	I1101 10:39:22.392039  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:39:22.392064  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:39:22.392084  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:39:22.392106  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/certs/key.pem (1679 bytes)
	I1101 10:39:22.392149  489608 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem (1708 bytes)
	I1101 10:39:22.392789  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:39:22.433518  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:39:22.507030  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:39:22.593190  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 10:39:22.622164  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 10:39:22.674540  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:39:22.700101  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:39:22.730753  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:39:22.763770  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/certs/287135.pem --> /usr/share/ca-certificates/287135.pem (1338 bytes)
	I1101 10:39:22.780376  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/ssl/certs/2871352.pem --> /usr/share/ca-certificates/2871352.pem (1708 bytes)
	I1101 10:39:22.798279  489608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:39:22.814978  489608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:39:22.827941  489608 ssh_runner.go:195] Run: openssl version
	I1101 10:39:22.835470  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/287135.pem && ln -fs /usr/share/ca-certificates/287135.pem /etc/ssl/certs/287135.pem"
	I1101 10:39:22.843733  489608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/287135.pem
	I1101 10:39:22.848843  489608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/287135.pem
	I1101 10:39:22.848925  489608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/287135.pem
	I1101 10:39:22.890271  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/287135.pem /etc/ssl/certs/51391683.0"
	I1101 10:39:22.898357  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2871352.pem && ln -fs /usr/share/ca-certificates/2871352.pem /etc/ssl/certs/2871352.pem"
	I1101 10:39:22.906697  489608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2871352.pem
	I1101 10:39:22.912662  489608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/2871352.pem
	I1101 10:39:22.912745  489608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2871352.pem
	I1101 10:39:22.956444  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2871352.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:39:22.964543  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:39:22.973548  489608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:22.978394  489608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:22.978476  489608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:39:23.021014  489608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:39:23.029931  489608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:39:23.035044  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:39:23.102988  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:39:23.176320  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:39:23.250702  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:39:23.348534  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:39:23.476116  489608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:39:23.561168  489608 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-245904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-245904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:39:23.561265  489608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:39:23.561337  489608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:39:23.634963  489608 cri.go:89] found id: "d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821"
	I1101 10:39:23.634987  489608 cri.go:89] found id: "f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39"
	I1101 10:39:23.635000  489608 cri.go:89] found id: "9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573"
	I1101 10:39:23.635004  489608 cri.go:89] found id: "30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2"
	I1101 10:39:23.635008  489608 cri.go:89] found id: ""
	I1101 10:39:23.635074  489608 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:39:23.671705  489608 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:39:23Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:39:23.671820  489608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:39:23.702003  489608 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:39:23.702024  489608 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:39:23.702124  489608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:39:23.714299  489608 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:39:23.714806  489608 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-245904" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:23.714921  489608 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-285274/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-245904" cluster setting kubeconfig missing "default-k8s-diff-port-245904" context setting]
	I1101 10:39:23.715248  489608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:23.718487  489608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:39:23.732327  489608 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:39:23.732360  489608 kubeadm.go:602] duration metric: took 30.329927ms to restartPrimaryControlPlane
	I1101 10:39:23.732370  489608 kubeadm.go:403] duration metric: took 171.21342ms to StartCluster
	I1101 10:39:23.732386  489608 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:23.732456  489608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:23.733138  489608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:23.733383  489608 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:39:23.733726  489608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:39:23.733801  489608 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-245904"
	I1101 10:39:23.733818  489608 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-245904"
	W1101 10:39:23.733823  489608 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:39:23.733845  489608 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:39:23.734306  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.734809  489608 config.go:182] Loaded profile config "default-k8s-diff-port-245904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:23.734900  489608 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-245904"
	I1101 10:39:23.734925  489608 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-245904"
	W1101 10:39:23.734944  489608 addons.go:248] addon dashboard should already be in state true
	I1101 10:39:23.734988  489608 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:39:23.735476  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.737783  489608 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-245904"
	I1101 10:39:23.737805  489608 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-245904"
	I1101 10:39:23.738090  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.740585  489608 out.go:179] * Verifying Kubernetes components...
	I1101 10:39:23.744131  489608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:23.786551  489608 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-245904"
	W1101 10:39:23.786573  489608 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:39:23.786597  489608 host.go:66] Checking if "default-k8s-diff-port-245904" exists ...
	I1101 10:39:23.787012  489608 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-245904 --format={{.State.Status}}
	I1101 10:39:23.802632  489608 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:39:23.802746  489608 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:39:23.805626  489608 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:23.805645  489608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:39:23.805725  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:23.809570  489608 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:39:23.819848  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:39:23.819882  489608 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:39:23.819989  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:23.836735  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:23.843506  489608 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:23.843529  489608 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:39:23.843590  489608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-245904
	I1101 10:39:23.872933  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:23.883741  489608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/default-k8s-diff-port-245904/id_rsa Username:docker}
	I1101 10:39:24.132699  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:39:24.132772  489608 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:39:24.249105  489608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:24.263422  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:39:24.263486  489608 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:39:24.304056  489608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:24.328572  489608 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:39:24.415051  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:39:24.415082  489608 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:39:24.416381  489608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:24.521244  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:39:24.521262  489608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:39:20.545935  488406 out.go:252]   - Generating certificates and keys ...
	I1101 10:39:20.546029  488406 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:39:20.546097  488406 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:39:20.888946  488406 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:39:22.323822  488406 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:39:22.562650  488406 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:39:23.465636  488406 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:39:24.618090  488406 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:39:24.618227  488406 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-220636 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:39:24.816201  488406 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:39:24.816342  488406 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-220636 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:39:24.750799  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:39:24.750827  489608 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:39:24.792237  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:39:24.792262  489608 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:39:24.842912  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:39:24.842936  489608 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:39:24.891553  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:39:24.891578  489608 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:39:24.930241  489608 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:39:24.930268  489608 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:39:24.966615  489608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:39:25.661808  488406 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:39:26.858144  488406 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:39:26.938012  488406 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:39:26.938089  488406 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:39:28.330033  488406 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:39:28.658066  488406 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:39:29.337446  488406 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:39:29.770063  488406 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:39:30.410055  488406 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:39:30.410157  488406 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:39:30.420820  488406 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:39:31.803500  489608 node_ready.go:49] node "default-k8s-diff-port-245904" is "Ready"
	I1101 10:39:31.803532  489608 node_ready.go:38] duration metric: took 7.474877486s for node "default-k8s-diff-port-245904" to be "Ready" ...
	I1101 10:39:31.803547  489608 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:39:31.803604  489608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:39:30.424284  488406 out.go:252]   - Booting up control plane ...
	I1101 10:39:30.424400  488406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:39:30.424482  488406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:39:30.424561  488406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:39:30.462107  488406 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:39:30.462225  488406 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:39:30.472619  488406 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:39:30.472965  488406 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:39:30.473014  488406 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:39:30.692054  488406 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:39:30.692179  488406 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:39:32.194031  488406 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50154482s
	I1101 10:39:32.197149  488406 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:39:32.197536  488406 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:39:32.198430  488406 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:39:32.198980  488406 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:39:35.270912  489608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.854503946s)
	I1101 10:39:35.271173  489608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.967046378s)
	I1101 10:39:35.741931  489608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.775273901s)
	I1101 10:39:35.742143  489608 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.938522475s)
	I1101 10:39:35.742203  489608 api_server.go:72] duration metric: took 12.008781433s to wait for apiserver process to appear ...
	I1101 10:39:35.742231  489608 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:39:35.742278  489608 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:39:35.744802  489608 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-245904 addons enable metrics-server
	
	I1101 10:39:35.747742  489608 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:39:35.750568  489608 addons.go:515] duration metric: took 12.016843001s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:39:35.773670  489608 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:39:35.773737  489608 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:39:36.242941  489608 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:39:36.253173  489608 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:39:36.254326  489608 api_server.go:141] control plane version: v1.34.1
	I1101 10:39:36.254353  489608 api_server.go:131] duration metric: took 512.101761ms to wait for apiserver health ...
	I1101 10:39:36.254363  489608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:39:36.262045  489608 system_pods.go:59] 8 kube-system pods found
	I1101 10:39:36.262087  489608 system_pods.go:61] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:39:36.262098  489608 system_pods.go:61] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:39:36.262104  489608 system_pods.go:61] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:39:36.262112  489608 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:39:36.262118  489608 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:39:36.262127  489608 system_pods.go:61] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:39:36.262135  489608 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:39:36.262145  489608 system_pods.go:61] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:39:36.262150  489608 system_pods.go:74] duration metric: took 7.781785ms to wait for pod list to return data ...
	I1101 10:39:36.262165  489608 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:39:36.265034  489608 default_sa.go:45] found service account: "default"
	I1101 10:39:36.265059  489608 default_sa.go:55] duration metric: took 2.887633ms for default service account to be created ...
	I1101 10:39:36.265069  489608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:39:36.279773  489608 system_pods.go:86] 8 kube-system pods found
	I1101 10:39:36.279807  489608 system_pods.go:89] "coredns-66bc5c9577-h2552" [f1f6d1e6-b67e-4d63-af54-505fd8515afa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:39:36.279820  489608 system_pods.go:89] "etcd-default-k8s-diff-port-245904" [a602d8b8-10ff-4e79-8464-b637f4def3d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:39:36.279826  489608 system_pods.go:89] "kindnet-5xtxk" [759fb4c8-8029-4d6e-a86c-3cf89ef062bc] Running
	I1101 10:39:36.279833  489608 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-245904" [6e6d8741-e9e3-49a1-b41d-14dd5c72747e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:39:36.279838  489608 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-245904" [9089ab65-b304-4a61-9df1-5c37ee3d2f90] Running
	I1101 10:39:36.279847  489608 system_pods.go:89] "kube-proxy-8d8hl" [309f6966-2ac7-41de-929d-dea12fe0b5a1] Running
	I1101 10:39:36.279853  489608 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-245904" [e756df5f-3d0e-40e8-be3e-0967ac382762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:39:36.279864  489608 system_pods.go:89] "storage-provisioner" [6c55ca98-ef8e-4ba6-9b84-96fb59d6cb08] Running
	I1101 10:39:36.279871  489608 system_pods.go:126] duration metric: took 14.796606ms to wait for k8s-apps to be running ...
	I1101 10:39:36.279883  489608 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:39:36.279939  489608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:39:36.316244  489608 system_svc.go:56] duration metric: took 36.351299ms WaitForService to wait for kubelet
	I1101 10:39:36.316273  489608 kubeadm.go:587] duration metric: took 12.582850527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:39:36.316296  489608 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:39:36.324483  489608 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:39:36.324516  489608 node_conditions.go:123] node cpu capacity is 2
	I1101 10:39:36.324530  489608 node_conditions.go:105] duration metric: took 8.227282ms to run NodePressure ...
	I1101 10:39:36.324542  489608 start.go:242] waiting for startup goroutines ...
	I1101 10:39:36.324549  489608 start.go:247] waiting for cluster config update ...
	I1101 10:39:36.324561  489608 start.go:256] writing updated cluster config ...
	I1101 10:39:36.324860  489608 ssh_runner.go:195] Run: rm -f paused
	I1101 10:39:36.334200  489608 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:39:36.338418  489608 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:39:38.352786  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:39:38.243518  488406 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.044631694s
	I1101 10:39:39.031577  488406 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.831127965s
	I1101 10:39:41.201029  488406 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.003051432s
	I1101 10:39:41.225285  488406 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:39:41.243999  488406 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:39:41.266896  488406 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:39:41.267561  488406 kubeadm.go:319] [mark-control-plane] Marking the node auto-220636 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:39:41.283386  488406 kubeadm.go:319] [bootstrap-token] Using token: go5y2n.yhiz6aziwoo1svrx
	I1101 10:39:41.286470  488406 out.go:252]   - Configuring RBAC rules ...
	I1101 10:39:41.286594  488406 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:39:41.295820  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:39:41.307202  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:39:41.313089  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:39:41.320570  488406 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:39:41.326711  488406 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:39:41.608684  488406 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:39:42.160222  488406 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:39:42.615660  488406 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:39:42.615679  488406 kubeadm.go:319] 
	I1101 10:39:42.615743  488406 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:39:42.615748  488406 kubeadm.go:319] 
	I1101 10:39:42.615829  488406 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:39:42.615834  488406 kubeadm.go:319] 
	I1101 10:39:42.615860  488406 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:39:42.615922  488406 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:39:42.615975  488406 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:39:42.615979  488406 kubeadm.go:319] 
	I1101 10:39:42.616036  488406 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:39:42.616040  488406 kubeadm.go:319] 
	I1101 10:39:42.616090  488406 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:39:42.616094  488406 kubeadm.go:319] 
	I1101 10:39:42.616156  488406 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:39:42.616235  488406 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:39:42.616306  488406 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:39:42.616311  488406 kubeadm.go:319] 
	I1101 10:39:42.616401  488406 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:39:42.616481  488406 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:39:42.616486  488406 kubeadm.go:319] 
	I1101 10:39:42.616574  488406 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token go5y2n.yhiz6aziwoo1svrx \
	I1101 10:39:42.616682  488406 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa \
	I1101 10:39:42.616703  488406 kubeadm.go:319] 	--control-plane 
	I1101 10:39:42.616707  488406 kubeadm.go:319] 
	I1101 10:39:42.616796  488406 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:39:42.616800  488406 kubeadm.go:319] 
	I1101 10:39:42.616886  488406 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token go5y2n.yhiz6aziwoo1svrx \
	I1101 10:39:42.616992  488406 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:182912b0f03827e406796cd84a990cb3d5d991be8f42c593d5bfa382c008b3fa 
	I1101 10:39:42.625501  488406 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:39:42.625768  488406 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:39:42.625887  488406 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:39:42.625902  488406 cni.go:84] Creating CNI manager for ""
	I1101 10:39:42.625910  488406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:39:42.629207  488406 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 10:39:40.386895  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:42.844929  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:39:42.632031  488406 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:39:42.639086  488406 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:39:42.639105  488406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:39:42.661497  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:39:43.093345  488406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:39:43.093483  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:43.093564  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-220636 minikube.k8s.io/updated_at=2025_11_01T10_39_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=auto-220636 minikube.k8s.io/primary=true
	I1101 10:39:43.353573  488406 ops.go:34] apiserver oom_adj: -16
	I1101 10:39:43.353722  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:43.853846  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:44.354196  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:44.854222  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:45.354715  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:45.854560  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:46.353817  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:46.853840  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:47.354140  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:47.854188  488406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:39:48.146703  488406 kubeadm.go:1114] duration metric: took 5.053263737s to wait for elevateKubeSystemPrivileges
	I1101 10:39:48.146735  488406 kubeadm.go:403] duration metric: took 27.936831611s to StartCluster
	I1101 10:39:48.146762  488406 settings.go:142] acquiring lock: {Name:mkfd225b2e9d67088f5debc9e94443cc2f92c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:48.146825  488406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:39:48.147857  488406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/kubeconfig: {Name:mk07a6f936f5b61a98c7ec4d5ab8d4f622b831fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:39:48.148075  488406 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:39:48.148210  488406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:39:48.148448  488406 config.go:182] Loaded profile config "auto-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:48.148428  488406 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:39:48.148549  488406 addons.go:70] Setting storage-provisioner=true in profile "auto-220636"
	I1101 10:39:48.148565  488406 addons.go:239] Setting addon storage-provisioner=true in "auto-220636"
	I1101 10:39:48.148590  488406 host.go:66] Checking if "auto-220636" exists ...
	I1101 10:39:48.149078  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:48.149396  488406 addons.go:70] Setting default-storageclass=true in profile "auto-220636"
	I1101 10:39:48.149434  488406 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-220636"
	I1101 10:39:48.149813  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:48.153898  488406 out.go:179] * Verifying Kubernetes components...
	I1101 10:39:48.157497  488406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:39:48.197070  488406 addons.go:239] Setting addon default-storageclass=true in "auto-220636"
	I1101 10:39:48.197113  488406 host.go:66] Checking if "auto-220636" exists ...
	I1101 10:39:48.197542  488406 cli_runner.go:164] Run: docker container inspect auto-220636 --format={{.State.Status}}
	I1101 10:39:48.210824  488406 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1101 10:39:45.350126  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:47.353389  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:39:48.213941  488406 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:48.213964  488406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:39:48.214036  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:48.248273  488406 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:48.248294  488406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:39:48.248354  488406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-220636
	I1101 10:39:48.262554  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:48.289586  488406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/auto-220636/id_rsa Username:docker}
	I1101 10:39:48.518968  488406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:39:48.670402  488406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:39:48.867079  488406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:39:48.872326  488406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:39:49.656818  488406 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.137817224s)
	I1101 10:39:49.656847  488406 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:39:49.658838  488406 node_ready.go:35] waiting up to 15m0s for node "auto-220636" to be "Ready" ...
	I1101 10:39:50.107159  488406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234795182s)
	I1101 10:39:50.110634  488406 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 10:39:50.113867  488406 addons.go:515] duration metric: took 1.965430944s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 10:39:50.163639  488406 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-220636" context rescaled to 1 replicas
	W1101 10:39:49.850950  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:52.344640  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:51.662795  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:39:54.162138  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:39:54.843824  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:56.845146  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:59.344491  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:39:56.162610  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:39:58.661860  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:01.843637  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:40:03.843730  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	W1101 10:40:00.662748  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:03.161824  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:05.162292  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:06.343918  489608 pod_ready.go:104] pod "coredns-66bc5c9577-h2552" is not "Ready", error: <nil>
	I1101 10:40:07.344556  489608 pod_ready.go:94] pod "coredns-66bc5c9577-h2552" is "Ready"
	I1101 10:40:07.344584  489608 pod_ready.go:86] duration metric: took 31.006139856s for pod "coredns-66bc5c9577-h2552" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.346874  489608 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.351329  489608 pod_ready.go:94] pod "etcd-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:07.351355  489608 pod_ready.go:86] duration metric: took 4.451377ms for pod "etcd-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.354149  489608 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.362799  489608 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:07.362837  489608 pod_ready.go:86] duration metric: took 8.663284ms for pod "kube-apiserver-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.365375  489608 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.542547  489608 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:07.542583  489608 pod_ready.go:86] duration metric: took 177.182885ms for pod "kube-controller-manager-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:07.744233  489608 pod_ready.go:83] waiting for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.142528  489608 pod_ready.go:94] pod "kube-proxy-8d8hl" is "Ready"
	I1101 10:40:08.142556  489608 pod_ready.go:86] duration metric: took 398.296899ms for pod "kube-proxy-8d8hl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.342891  489608 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.744790  489608 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-245904" is "Ready"
	I1101 10:40:08.744869  489608 pod_ready.go:86] duration metric: took 401.949244ms for pod "kube-scheduler-default-k8s-diff-port-245904" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:40:08.744900  489608 pod_ready.go:40] duration metric: took 32.410667664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:40:08.803504  489608 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:40:08.806726  489608 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-245904" cluster and "default" namespace by default
	W1101 10:40:07.661516  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:09.662565  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:12.162223  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:14.162290  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:16.662072  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:19.161910  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:21.163279  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	W1101 10:40:23.662170  488406 node_ready.go:57] node "auto-220636" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.643380027Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a69a5a79-06b8-4be5-8959-811c57d66c55 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.644633383Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3077858-ce90-4bee-b920-4bbc4a566c31 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.645633499Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper" id=c3aeeb2e-14b2-4bce-b722-6327fcc5812c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.645796144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.655831431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.656553692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.672590824Z" level=info msg="Created container 354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper" id=c3aeeb2e-14b2-4bce-b722-6327fcc5812c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.676903253Z" level=info msg="Starting container: 354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5" id=c072d766-f358-4a2b-bacb-c53afd6db573 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:40:13 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:13.679457168Z" level=info msg="Started container" PID=1681 containerID=354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper id=c072d766-f358-4a2b-bacb-c53afd6db573 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8b73ba8303ffb1c4480ae72c741a9f8e1d960bc240535aafacf3b5b710c8609
	Nov 01 10:40:13 default-k8s-diff-port-245904 conmon[1679]: conmon 354e6c29f4ba8d02bcc9 <ninfo>: container 1681 exited with status 1
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.057396175Z" level=info msg="Removing container: c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1" id=d239d791-11a1-4ed5-a7af-a7f53044723d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.06518009Z" level=info msg="Error loading conmon cgroup of container c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1: cgroup deleted" id=d239d791-11a1-4ed5-a7af-a7f53044723d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.069461971Z" level=info msg="Removed container c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh/dashboard-metrics-scraper" id=d239d791-11a1-4ed5-a7af-a7f53044723d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.777253165Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.781814845Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.781858136Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.781882465Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.785551871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.785621501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.785648792Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.789292704Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.789333706Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.789432793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.794024414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:40:14 default-k8s-diff-port-245904 crio[656]: time="2025-11-01T10:40:14.794062856Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	354e6c29f4ba8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   2                   d8b73ba8303ff       dashboard-metrics-scraper-6ffb444bf9-gl8hh             kubernetes-dashboard
	d46a0edeb9401       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago       Running             storage-provisioner         2                   8f4fc819c76e5       storage-provisioner                                    kube-system
	4e1c18e366f01       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   244ee22402079       kubernetes-dashboard-855c9754f9-l727q                  kubernetes-dashboard
	98c11ffd4d3f9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   cdda822eb1b64       coredns-66bc5c9577-h2552                               kube-system
	f8a20eb3878fb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   339a0d738e46c       kube-proxy-8d8hl                                       kube-system
	b7b00512262ae       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   ba9718875aa11       kindnet-5xtxk                                          kube-system
	b839606527a0b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   8f4fc819c76e5       storage-provisioner                                    kube-system
	3cb40663dbe09       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   55d43120cedb8       busybox                                                default
	d782666800538       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   911683cffce8e       kube-apiserver-default-k8s-diff-port-245904            kube-system
	f9910db4dfdda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   dacd1b00f3201       kube-controller-manager-default-k8s-diff-port-245904   kube-system
	9cfafd062ccb4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   58b5306c33908       kube-scheduler-default-k8s-diff-port-245904            kube-system
	30e834d8a77dc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c67a1944f6b69       etcd-default-k8s-diff-port-245904                      kube-system
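
The container listing above is the CRI-O runtime's view of every container on the node, including exited ones. A roughly equivalent listing can be produced on the node itself with crictl; this is a sketch only, assuming crictl is installed and the socket path below (the conventional CRI-O default, not taken from this report) is correct:

	# List all containers (running and exited) via the CRI API, as in the table above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a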
	
	
	==> coredns [98c11ffd4d3f91309c84aba212eabefcb80ccd370b1c392fdbd639ef33c9cf14] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60276 - 46647 "HINFO IN 840714791110925119.7241798148311781223. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021753219s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-245904
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-245904
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=default-k8s-diff-port-245904
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_38_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-245904
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:40:13 +0000   Sat, 01 Nov 2025 10:38:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-245904
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                50f868bb-abe9-4a86-b184-01355addeabf
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-h2552                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-245904                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-5xtxk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-245904             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-245904    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-8d8hl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-245904             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gl8hh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l727q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-245904 event: Registered Node default-k8s-diff-port-245904 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-245904 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-245904 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-245904 event: Registered Node default-k8s-diff-port-245904 in Controller
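
The node description above is standard kubectl output for the control-plane node. It can be regenerated against the same cluster with plain kubectl or with minikube's bundled kubectl; a minimal sketch, assuming the kubeconfig written for this profile is still in place:

	# Reproduce the node description for the node shown above.
	kubectl describe node default-k8s-diff-port-245904
	# Or through the profile's bundled kubectl:
	minikube -p default-k8s-diff-port-245904 kubectl -- describe node default-k8s-diff-port-245904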
	
	
	==> dmesg <==
	[Nov 1 10:18] overlayfs: idmapped layers are currently not supported
	[ +27.490641] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [30e834d8a77dcb064a27c0c12896c576a1ecda9002b655df2d47b3c124e33ac2] <==
	{"level":"warn","ts":"2025-11-01T10:39:28.461805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.490586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.547442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.585903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.612696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.685510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.765520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.883767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:28.946630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.069761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.078680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.174065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.205036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.269074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.338010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.375782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.417996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.482918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.525435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.571243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:29.779851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:39:32.924794Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.969175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:public-info-viewer\" limit:1 ","response":"range_response_count:1 size:613"}
	{"level":"info","ts":"2025-11-01T10:39:32.924858Z","caller":"traceutil/trace.go:172","msg":"trace[1552577361] range","detail":"{range_begin:/registry/clusterroles/system:public-info-viewer; range_end:; response_count:1; response_revision:497; }","duration":"119.048257ms","start":"2025-11-01T10:39:32.805796Z","end":"2025-11-01T10:39:32.924844Z","steps":["trace[1552577361] 'agreement among raft nodes before linearized reading'  (duration: 21.851914ms)","trace[1552577361] 'range keys from in-memory index tree'  (duration: 97.039303ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:39:32.925420Z","caller":"traceutil/trace.go:172","msg":"trace[1477835500] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"119.822761ms","start":"2025-11-01T10:39:32.805583Z","end":"2025-11-01T10:39:32.925406Z","steps":["trace[1477835500] 'process raft request'  (duration: 22.142232ms)","trace[1477835500] 'compare'  (duration: 97.271841ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:39:33.955085Z","caller":"traceutil/trace.go:172","msg":"trace[1971129672] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"196.901711ms","start":"2025-11-01T10:39:33.758165Z","end":"2025-11-01T10:39:33.955067Z","steps":["trace[1971129672] 'process raft request'  (duration: 165.622001ms)","trace[1971129672] 'compare'  (duration: 31.129061ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:40:26 up  2:22,  0 user,  load average: 3.70, 4.08, 3.38
	Linux default-k8s-diff-port-245904 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
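
The kernel section above combines the node's uptime and load, kernel identification, and OS release name. As a hedged approximation (the report does not state which commands produced it), the same information can be gathered on the node with:

	# Uptime/load, kernel version, and distribution name, matching the lines above.
	uptime
	uname -a
	grep PRETTY_NAME /etc/os-release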
	
	
	==> kindnet [b7b00512262aea3dcc035878abe865da07ea524a984e03217db4298decd3413f] <==
	I1101 10:39:34.443288       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:39:34.443518       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:39:34.443650       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:39:34.443661       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:39:34.443674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:39:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:39:34.776390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:39:34.776464       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:39:34.776499       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:39:34.821978       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:40:04.776687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:40:04.823283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:40:04.823285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:40:04.823483       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:40:06.223451       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:40:06.223484       1 metrics.go:72] Registering metrics
	I1101 10:40:06.223536       1 controller.go:711] "Syncing nftables rules"
	I1101 10:40:14.776843       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:40:14.776939       1 main.go:301] handling current node
	I1101 10:40:24.777866       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:40:24.781968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d782666800538b469e418a5f838868b74612a893a1e3a0765dd3ca1190d13821] <==
	I1101 10:39:32.242033       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:39:32.259480       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:39:32.260013       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:39:32.260036       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:39:32.260045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:39:32.260051       1 cache.go:39] Caches are synced for autoregister controller
	E1101 10:39:32.283685       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:39:32.293877       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:39:32.294003       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:39:32.303350       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:39:32.316720       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:39:32.317000       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:39:32.338999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:39:32.375788       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:39:32.472374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:39:32.724734       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:39:34.072645       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:39:34.663839       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:39:35.032082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:39:35.210730       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:39:35.657450       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.33.67"}
	I1101 10:39:35.720028       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.209.79"}
	I1101 10:39:38.095608       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:39:38.342380       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:39:38.488736       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f9910db4dfddad6c3e5a4f8b750b121b8871d21bdf0d44561df2a5718b2e3e39] <==
	I1101 10:39:38.022378       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:39:38.022499       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-245904"
	I1101 10:39:38.022554       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:39:38.012790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:39:38.022611       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:39:38.022618       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:39:38.012747       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:39:37.981893       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:39:38.012833       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:39:38.032267       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:39:38.032394       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:39:38.032541       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:39:38.042628       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:39:37.981879       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:39:38.045550       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:39:38.012473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:39:38.012762       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:39:38.012801       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:39:38.012809       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:39:38.012824       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:39:38.012841       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:39:38.077629       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:39:38.077764       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:39:38.080114       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:39:38.080334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [f8a20eb3878fb74917aa7efd04e8592e15bb898b2148768ed94f97fa6c1e0aff] <==
	I1101 10:39:36.104815       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:39:36.314812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:39:36.421582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:39:36.421657       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:39:36.421800       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:39:36.928716       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:39:36.928833       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:39:36.950976       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:39:36.951400       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:39:36.951626       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:39:36.952934       1 config.go:200] "Starting service config controller"
	I1101 10:39:36.953003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:39:36.953057       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:39:36.953100       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:39:36.953148       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:39:36.953186       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:39:36.953899       1 config.go:309] "Starting node config controller"
	I1101 10:39:36.953975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:39:36.954008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:39:37.054368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:39:37.054369       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:39:37.054405       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9cfafd062ccb475a6b1b6b434b2b13c9f646113eeda200d84df703684661e573] <==
	I1101 10:39:27.175255       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:39:36.700629       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:39:36.700868       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:39:36.721622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:39:36.722986       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:39:36.723056       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:39:36.723109       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:39:36.723935       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:39:36.724008       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:39:36.724093       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:39:36.724126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:39:36.823841       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:39:36.825333       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:39:36.825428       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:38.502849     783 status_manager.go:1018] "Failed to get status for pod" err="pods \"dashboard-metrics-scraper-6ffb444bf9-gl8hh\" is forbidden: User \"system:node:default-k8s-diff-port-245904\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'default-k8s-diff-port-245904' and this object" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639237     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b29821b8-c8ed-4661-be4e-54b3ffcd852b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-l727q\" (UID: \"b29821b8-c8ed-4661-be4e-54b3ffcd852b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l727q"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639309     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz5c\" (UniqueName: \"kubernetes.io/projected/ed252192-818e-45b5-82a4-86dd6cb408b9-kube-api-access-jjz5c\") pod \"dashboard-metrics-scraper-6ffb444bf9-gl8hh\" (UID: \"ed252192-818e-45b5-82a4-86dd6cb408b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639338     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed252192-818e-45b5-82a4-86dd6cb408b9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gl8hh\" (UID: \"ed252192-818e-45b5-82a4-86dd6cb408b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh"
	Nov 01 10:39:38 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:38.639356     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvvk7\" (UniqueName: \"kubernetes.io/projected/b29821b8-c8ed-4661-be4e-54b3ffcd852b-kube-api-access-vvvk7\") pod \"kubernetes-dashboard-855c9754f9-l727q\" (UID: \"b29821b8-c8ed-4661-be4e-54b3ffcd852b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l727q"
	Nov 01 10:39:39 default-k8s-diff-port-245904 kubelet[783]: W1101 10:39:39.778556     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a7be6b4a2a8803f6a71a3112e4c837278629125efa653cc7907bcf6a2648ca5e/crio-244ee2240207915c67d54df3d42c89ac8b45d65cbc4307e8cf776711c0d55449 WatchSource:0}: Error finding container 244ee2240207915c67d54df3d42c89ac8b45d65cbc4307e8cf776711c0d55449: Status 404 returned error can't find the container with id 244ee2240207915c67d54df3d42c89ac8b45d65cbc4307e8cf776711c0d55449
	Nov 01 10:39:52 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:52.986885     783 scope.go:117] "RemoveContainer" containerID="f5bccd49a2305d3009a385e5b58d31dcbd715f902727659225f510543796928e"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:53.018440     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l727q" podStartSLOduration=8.540587758000001 podStartE2EDuration="15.01842248s" podCreationTimestamp="2025-11-01 10:39:38 +0000 UTC" firstStartedPulling="2025-11-01 10:39:39.790315131 +0000 UTC m=+17.394798322" lastFinishedPulling="2025-11-01 10:39:46.268149853 +0000 UTC m=+23.872633044" observedRunningTime="2025-11-01 10:39:46.989518763 +0000 UTC m=+24.594001971" watchObservedRunningTime="2025-11-01 10:39:53.01842248 +0000 UTC m=+30.622905679"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:53.991477     783 scope.go:117] "RemoveContainer" containerID="f5bccd49a2305d3009a385e5b58d31dcbd715f902727659225f510543796928e"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:53.992387     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:39:53 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:53.992651     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:39:54 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:54.995764     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:39:54 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:54.995957     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:39:59 default-k8s-diff-port-245904 kubelet[783]: I1101 10:39:59.683388     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:39:59 default-k8s-diff-port-245904 kubelet[783]: E1101 10:39:59.683606     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:40:06 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:06.030242     783 scope.go:117] "RemoveContainer" containerID="b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac"
	Nov 01 10:40:13 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:13.642828     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:40:14 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:14.054552     783 scope.go:117] "RemoveContainer" containerID="c2fe8cce7171c116c3c804ee25bb647faec49744b3eea198d88365dca56075b1"
	Nov 01 10:40:14 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:14.055360     783 scope.go:117] "RemoveContainer" containerID="354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	Nov 01 10:40:14 default-k8s-diff-port-245904 kubelet[783]: E1101 10:40:14.055689     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:40:19 default-k8s-diff-port-245904 kubelet[783]: I1101 10:40:19.682535     783 scope.go:117] "RemoveContainer" containerID="354e6c29f4ba8d02bcc9650f7c3443668404bab4cd3e617a9467f65a59e0efc5"
	Nov 01 10:40:19 default-k8s-diff-port-245904 kubelet[783]: E1101 10:40:19.683220     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gl8hh_kubernetes-dashboard(ed252192-818e-45b5-82a4-86dd6cb408b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gl8hh" podUID="ed252192-818e-45b5-82a4-86dd6cb408b9"
	Nov 01 10:40:21 default-k8s-diff-port-245904 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:40:21 default-k8s-diff-port-245904 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:40:21 default-k8s-diff-port-245904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4e1c18e366f011597bd4500e494e129d7e239722c028290b019581f02af5459f] <==
	2025/11/01 10:39:46 Using namespace: kubernetes-dashboard
	2025/11/01 10:39:46 Using in-cluster config to connect to apiserver
	2025/11/01 10:39:46 Using secret token for csrf signing
	2025/11/01 10:39:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:39:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:39:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:39:46 Generating JWE encryption key
	2025/11/01 10:39:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:39:47 Initializing JWE encryption key from synchronized object
	2025/11/01 10:39:47 Creating in-cluster Sidecar client
	2025/11/01 10:39:47 Serving insecurely on HTTP port: 9090
	2025/11/01 10:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:40:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:39:46 Starting overwatch
	
	
	==> storage-provisioner [b839606527a0b636e484040e6f65caadbe27fa5fd6f705b9d1a78d038a9ccdac] <==
	I1101 10:39:35.298351       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:40:05.520740       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d46a0edeb94014e2b6de899870e120c1e9663026c65d0bae3809f3f4a5097fd4] <==
	I1101 10:40:06.077935       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:40:06.092658       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:40:06.092791       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:40:06.095096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:09.549850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:13.810270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:17.408858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:20.468625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:23.490761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:23.502844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:40:23.504774       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e194f99-8f93-4855-b159-998a98b1e129", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-245904_e3b2b290-6204-4191-bc9c-12b4a7fe5bf8 became leader
	I1101 10:40:23.507535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:40:23.507758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-245904_e3b2b290-6204-4191-bc9c-12b4a7fe5bf8!
	W1101 10:40:23.522536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:23.527280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:40:23.608682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-245904_e3b2b290-6204-4191-bc9c-12b4a7fe5bf8!
	W1101 10:40:25.530250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:40:25.535007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904: exit status 2 (377.39694ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.51s)
E1101 10:46:13.342282  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:15.040042  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
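The post-mortem above closes with a pod-phase query: kubectl prints the name of every pod, in every namespace, whose status.phase is anything other than Running, combining a field selector with a JSONPath template. A minimal Go sketch of that same check is below; the helper name and the hard-coded context are illustrative only, not the actual helpers_test.go code.

	package main

	// A sketch of the post-mortem query shown above: list pods across all
	// namespaces that are not in the Running phase. Illustrative only.
	import (
		"fmt"
		"os/exec"
	)

	func nonRunningPods(kubeContext string) (string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		return string(out), err
	}

	func main() {
		pods, err := nonRunningPods("default-k8s-diff-port-245904")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("non-running pods:", pods)
	}

An empty result here simply means no pod was stuck outside the Running phase at the moment the dump was taken.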

                                                
                                    

Test pass (256/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.6
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.04
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.24
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 169.83
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.8
48 TestAddons/StoppedEnableDisable 12.43
49 TestCertOptions 42.01
50 TestCertExpiration 249.3
52 TestForceSystemdFlag 41.32
53 TestForceSystemdEnv 49.69
58 TestErrorSpam/setup 33.17
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 6.26
62 TestErrorSpam/unpause 5.93
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.41
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 55.56
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 1.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 31.43
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.65
87 TestFunctional/serial/InvalidService 4.16
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 13.49
91 TestFunctional/parallel/DryRun 0.64
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.08
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 26
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.38
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.18
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 8.05
130 TestFunctional/parallel/MountCmd/specific-port 1.81
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
132 TestFunctional/parallel/ServiceCmd/List 0.63
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
137 TestFunctional/parallel/Version/short 0.09
138 TestFunctional/parallel/Version/components 1.02
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
144 TestFunctional/parallel/ImageCommands/Setup 0.6
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.19
163 TestMultiControlPlane/serial/DeployApp 7.64
164 TestMultiControlPlane/serial/PingHostFromPods 1.53
165 TestMultiControlPlane/serial/AddWorkerNode 61.47
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.02
169 TestMultiControlPlane/serial/StopSecondaryNode 12.89
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 29.27
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.46
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 118.79
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.06
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
176 TestMultiControlPlane/serial/StopCluster 25.59
185 TestJSONOutput/start/Command 80.62
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.28
210 TestKicCustomNetwork/create_custom_network 38.75
211 TestKicCustomNetwork/use_default_bridge_network 38.06
212 TestKicExistingNetwork 35.92
213 TestKicCustomSubnet 36.16
214 TestKicStaticIP 38.85
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 68.08
219 TestMountStart/serial/StartWithMountFirst 8.92
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.32
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.77
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 141.45
231 TestMultiNode/serial/DeployApp2Nodes 5.21
232 TestMultiNode/serial/PingHostFrom2Pods 0.93
233 TestMultiNode/serial/AddNode 59.58
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.49
237 TestMultiNode/serial/StopNode 2.38
238 TestMultiNode/serial/StartAfterStop 8.33
239 TestMultiNode/serial/RestartKeepsNodes 73.57
240 TestMultiNode/serial/DeleteNode 5.61
241 TestMultiNode/serial/StopMultiNode 23.94
242 TestMultiNode/serial/RestartMultiNode 53.18
243 TestMultiNode/serial/ValidateNameConflict 36.84
248 TestPreload 126.16
250 TestScheduledStopUnix 109.3
253 TestInsufficientStorage 12.57
254 TestRunningBinaryUpgrade 50.92
256 TestKubernetesUpgrade 212.53
257 TestMissingContainerUpgrade 116.65
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 45.01
261 TestNoKubernetes/serial/StartWithStopK8s 59.98
262 TestNoKubernetes/serial/Start 11.27
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
264 TestNoKubernetes/serial/ProfileList 1.16
265 TestNoKubernetes/serial/Stop 1.39
266 TestNoKubernetes/serial/StartNoArgs 7.54
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.7
269 TestStoppedBinaryUpgrade/Upgrade 67.54
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
279 TestPause/serial/Start 84.59
280 TestPause/serial/SecondStartNoReconfiguration 27.57
289 TestNetworkPlugins/group/false 5.61
294 TestStartStop/group/old-k8s-version/serial/FirstStart 63.12
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
297 TestStartStop/group/old-k8s-version/serial/Stop 12.06
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
299 TestStartStop/group/old-k8s-version/serial/SecondStart 49.5
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
305 TestStartStop/group/no-preload/serial/FirstStart 74.69
307 TestStartStop/group/embed-certs/serial/FirstStart 88.59
308 TestStartStop/group/no-preload/serial/DeployApp 8.32
310 TestStartStop/group/no-preload/serial/Stop 12.06
311 TestStartStop/group/embed-certs/serial/DeployApp 8.41
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/no-preload/serial/SecondStart 53
315 TestStartStop/group/embed-certs/serial/Stop 12.42
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
317 TestStartStop/group/embed-certs/serial/SecondStart 61.19
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.59
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
329 TestStartStop/group/newest-cni/serial/FirstStart 40.43
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.35
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/newest-cni/serial/SecondStart 15.79
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
342 TestNetworkPlugins/group/auto/Start 86.42
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.73
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
349 TestNetworkPlugins/group/kindnet/Start 89.79
350 TestNetworkPlugins/group/auto/KubeletFlags 0.38
351 TestNetworkPlugins/group/auto/NetCatPod 12.36
352 TestNetworkPlugins/group/auto/DNS 0.22
353 TestNetworkPlugins/group/auto/Localhost 0.17
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/calico/Start 61.08
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.32
361 TestNetworkPlugins/group/calico/NetCatPod 11.27
362 TestNetworkPlugins/group/kindnet/DNS 0.21
363 TestNetworkPlugins/group/kindnet/Localhost 0.14
364 TestNetworkPlugins/group/kindnet/HairPin 0.14
365 TestNetworkPlugins/group/calico/DNS 0.23
366 TestNetworkPlugins/group/calico/Localhost 0.23
367 TestNetworkPlugins/group/calico/HairPin 0.19
368 TestNetworkPlugins/group/custom-flannel/Start 71.51
369 TestNetworkPlugins/group/enable-default-cni/Start 81.04
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
372 TestNetworkPlugins/group/custom-flannel/DNS 0.15
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
377 TestNetworkPlugins/group/flannel/Start 67.36
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/bridge/Start 85.44
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
384 TestNetworkPlugins/group/flannel/NetCatPod 11.26
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.13
387 TestNetworkPlugins/group/flannel/HairPin 0.14
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.3
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (7.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-632367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-632367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.596590988s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.60s)
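The json-events variant starts minikube with -o=json, which switches stdout to a machine-readable stream of JSON events while --alsologtostderr keeps the human-readable log on stderr. A minimal consumer for that stream, assuming one JSON object per line, might look like the sketch below; the exact event schema is not reproduced in this report, so each event is decoded into a generic map rather than a typed struct.

	package main

	// A minimal sketch of a consumer for `minikube start -o=json`, assuming
	// line-delimited JSON events on stdout. The event schema is not shown in
	// this report, so events are decoded into a generic map.
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-o=json", "--download-only", "-p", "download-only-632367",
			"--kubernetes-version=v1.28.0", "--container-runtime=crio", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Println("non-JSON line:", sc.Text())
				continue
			}
			fmt.Printf("event: %v\n", ev)
		}
		if err := cmd.Wait(); err != nil {
			log.Printf("minikube exited with error: %v", err)
		}
	}

Decoding into a map keeps the sketch independent of minikube's internal event types.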

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:28:37.108029  287135 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 09:28:37.108125  287135 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
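The preload-exists check amounts to a file-existence test against the tarball path logged above. A sketch of the same check, with the path copied from the output (the real test presumably derives it from minikube's cache layout):

	package main

	// Verify that the preloaded image tarball logged above is present on
	// disk. The path is copied verbatim from the test output; illustrative only.
	import (
		"fmt"
		"os"
	)

	func main() {
		const preload = "/home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
		info, err := os.Stat(preload)
		if err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Printf("found preload (%d bytes)\n", info.Size())
	}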

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-632367
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-632367: exit status 85 (92.833524ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-632367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-632367 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:29.554167  287141 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:29.554305  287141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:29.554317  287141 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:29.554336  287141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:29.554633  287141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	W1101 09:28:29.554817  287141 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21833-285274/.minikube/config/config.json: open /home/jenkins/minikube-integration/21833-285274/.minikube/config/config.json: no such file or directory
	I1101 09:28:29.555239  287141 out.go:368] Setting JSON to true
	I1101 09:28:29.556077  287141 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4259,"bootTime":1761985051,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:28:29.556145  287141 start.go:143] virtualization:  
	I1101 09:28:29.560146  287141 out.go:99] [download-only-632367] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1101 09:28:29.560356  287141 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:28:29.560414  287141 notify.go:221] Checking for updates...
	I1101 09:28:29.563408  287141 out.go:171] MINIKUBE_LOCATION=21833
	I1101 09:28:29.566433  287141 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:29.569395  287141 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:28:29.572385  287141 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:28:29.575380  287141 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 09:28:29.581107  287141 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:28:29.581414  287141 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:29.612596  287141 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:28:29.612698  287141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:29.670267  287141 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 09:28:29.660307296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:29.670383  287141 docker.go:319] overlay module found
	I1101 09:28:29.673503  287141 out.go:99] Using the docker driver based on user configuration
	I1101 09:28:29.673541  287141 start.go:309] selected driver: docker
	I1101 09:28:29.673552  287141 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:29.673660  287141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:29.732613  287141 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 09:28:29.723772842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:29.732762  287141 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:29.733061  287141 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 09:28:29.733217  287141 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:28:29.736210  287141 out.go:171] Using Docker driver with root privileges
	I1101 09:28:29.739144  287141 cni.go:84] Creating CNI manager for ""
	I1101 09:28:29.739212  287141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:29.739227  287141 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:29.739313  287141 start.go:353] cluster config:
	{Name:download-only-632367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-632367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:28:29.742189  287141 out.go:99] Starting "download-only-632367" primary control-plane node in "download-only-632367" cluster
	I1101 09:28:29.742210  287141 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:29.745129  287141 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:29.745160  287141 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:29.745280  287141 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:29.761360  287141 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:29.762092  287141 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:29.762219  287141 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:29.801337  287141 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:28:29.801362  287141 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:29.801519  287141 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:29.804883  287141 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:28:29.804917  287141 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1101 09:28:29.894828  287141 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1101 09:28:29.894986  287141 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:28:32.792562  287141 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:28:32.792974  287141 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/download-only-632367/config.json ...
	I1101 09:28:32.793011  287141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/download-only-632367/config.json: {Name:mke60a973d52c464ca62c382cac486dbd40c6ead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:32.793208  287141 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:32.793407  287141 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21833-285274/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-632367 host does not exist
	  To start a cluster, run: "minikube start -p download-only-632367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-632367
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-775162 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-775162 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.041843283s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.04s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:28:41.606932  287135 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:28:41.606969  287135 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-285274/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-775162
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-775162: exit status 85 (236.790899ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-632367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-632367 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-632367                                                                                                                                                   │ download-only-632367 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ -o=json --download-only -p download-only-775162 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-775162 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:37.607918  287340 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:37.608229  287340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:37.608245  287340 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:37.608251  287340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:37.608551  287340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:28:37.609011  287340 out.go:368] Setting JSON to true
	I1101 09:28:37.609880  287340 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4267,"bootTime":1761985051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:28:37.609974  287340 start.go:143] virtualization:  
	I1101 09:28:37.613285  287340 out.go:99] [download-only-775162] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:28:37.613492  287340 notify.go:221] Checking for updates...
	I1101 09:28:37.616469  287340 out.go:171] MINIKUBE_LOCATION=21833
	I1101 09:28:37.619399  287340 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:37.622305  287340 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:28:37.625180  287340 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:28:37.628056  287340 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 09:28:37.633718  287340 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:28:37.634023  287340 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:37.658679  287340 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:28:37.658904  287340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:37.719015  287340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-11-01 09:28:37.709100572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:37.719122  287340 docker.go:319] overlay module found
	I1101 09:28:37.722198  287340 out.go:99] Using the docker driver based on user configuration
	I1101 09:28:37.722241  287340 start.go:309] selected driver: docker
	I1101 09:28:37.722248  287340 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:37.722369  287340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:37.773881  287340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-11-01 09:28:37.764588805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:37.774028  287340 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:37.774323  287340 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 09:28:37.774471  287340 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:28:37.777583  287340 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-775162 host does not exist
	  To start a cluster, run: "minikube start -p download-only-775162"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-775162
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 09:28:42.939179  287135 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-960233 --alsologtostderr --binary-mirror http://127.0.0.1:36239 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-960233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-960233
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-720971
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-720971: exit status 85 (72.688088ms)

                                                
                                                
-- stdout --
	* Profile "addons-720971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-720971"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-720971
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-720971: exit status 85 (87.339729ms)

                                                
                                                
-- stdout --
	* Profile "addons-720971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-720971"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (169.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-720971 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-720971 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.830864218s)
--- PASS: TestAddons/Setup (169.83s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-720971 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-720971 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-720971 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-720971 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f9c19b18-e0d8-4eae-887d-9c6a70258ee3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f9c19b18-e0d8-4eae-887d-9c6a70258ee3] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004265725s
addons_test.go:694: (dbg) Run:  kubectl --context addons-720971 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-720971 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-720971 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-720971 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.80s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-720971
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-720971: (12.14113795s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-720971
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-720971
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-720971
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

                                                
                                    
TestCertOptions (42.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-082900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.142354358s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-082900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-082900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-082900 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-082900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-082900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-082900: (2.122096458s)
--- PASS: TestCertOptions (42.01s)

                                                
                                    
TestCertExpiration (249.3s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-459318 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.296866549s)
E1101 10:31:34.715416  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-459318 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (23.799673686s)
helpers_test.go:175: Cleaning up "cert-expiration-459318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-459318
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-459318: (3.198457787s)
--- PASS: TestCertExpiration (249.30s)

                                                
                                    
TestForceSystemdFlag (41.32s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-854151 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-854151 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.06866195s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-854151 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-854151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-854151
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-854151: (2.859138719s)
--- PASS: TestForceSystemdFlag (41.32s)

                                                
                                    
TestForceSystemdEnv (49.69s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-065424 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-065424 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (46.633909754s)
helpers_test.go:175: Cleaning up "force-systemd-env-065424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-065424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-065424: (3.053148121s)
--- PASS: TestForceSystemdEnv (49.69s)

                                                
                                    
TestErrorSpam/setup (33.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-682803 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-682803 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-682803 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-682803 --driver=docker  --container-runtime=crio: (33.173876649s)
--- PASS: TestErrorSpam/setup (33.17s)

                                                
                                    
TestErrorSpam/start (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (6.26s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause: exit status 80 (2.276553695s)

                                                
                                                
-- stdout --
	* Pausing node nospam-682803 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause: exit status 80 (2.390033418s)

                                                
                                                
-- stdout --
	* Pausing node nospam-682803 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause: exit status 80 (1.593817308s)

                                                
                                                
-- stdout --
	* Pausing node nospam-682803 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.26s)

                                                
                                    
TestErrorSpam/unpause (5.93s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause: exit status 80 (1.998498043s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-682803 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause: exit status 80 (1.930751891s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-682803 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause: exit status 80 (2.002108232s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-682803 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.93s)

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 stop: (1.306119176s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-682803 --log_dir /tmp/nospam-682803 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21833-285274/.minikube/files/etc/test/nested/copy/287135/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.41s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034342 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1101 09:36:34.721906  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:34.728287  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:34.739733  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:34.761209  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:34.802634  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:34.884123  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:35.045605  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:35.367490  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:36.009618  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:37.291127  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:39.852753  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:44.975314  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:55.217485  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-034342 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.408474271s)
--- PASS: TestFunctional/serial/StartWithProxy (77.41s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (55.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 09:37:08.167918  287135 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034342 --alsologtostderr -v=8
E1101 09:37:15.698954  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:56.660948  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-034342 --alsologtostderr -v=8: (55.560521998s)
functional_test.go:678: soft start took 55.561015225s for "functional-034342" cluster.
I1101 09:38:03.728784  287135 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (55.56s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-034342 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 cache add registry.k8s.io/pause:3.1: (1.294410229s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 cache add registry.k8s.io/pause:3.3: (1.165237828s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 cache add registry.k8s.io/pause:latest: (1.104383511s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-034342 /tmp/TestFunctionalserialCacheCmdcacheadd_local2309819093/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cache add minikube-local-cache-test:functional-034342
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cache delete minikube-local-cache-test:functional-034342
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-034342
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (313.333956ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)
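The cache_reload steps above remove a cached image from the node, confirm with `crictl inspecti` that it is gone, and then restore it with `cache reload`. The following Go sketch drives the same round trip through os/exec; the binary path and profile name are copied from this log, while the helper, error handling, and messages are illustrative only and are not the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report against the
// functional-034342 profile and returns its combined output.
func run(args ...string) (string, error) {
	full := append([]string{"-p", "functional-034342"}, args...)
	out, err := exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Drop the image from the node's container runtime.
	run("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")

	// inspecti should now fail, mirroring the expected exit status 1 above.
	if _, err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image unexpectedly still present")
		return
	}

	// Restore everything recorded in the local cache back onto the node.
	if out, err := run("cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err, out)
		return
	}

	// The image should be present again.
	if _, err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload round trip OK")
}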

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 kubectl -- --context functional-034342 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-034342 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.43s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-034342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.42842424s)
functional_test.go:776: restart took 31.428526609s for "functional-034342" cluster.
I1101 09:38:42.709377  287135 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (31.43s)
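ExtraConfig restarts the cluster with `--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision`; the prefix before the first dot names the component and the remainder is passed through as a flag on it. One rough way to confirm the flag reached the API server, sketched below and not something this test does, is to read the static pod manifest inside the node. The manifest path is the usual kubeadm location and is an assumption here, not taken from this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumed kubeadm-style location of the API server static pod manifest.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-034342",
		"ssh", "sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "NamespaceAutoProvision") {
		fmt.Println("enable-admission-plugins carries NamespaceAutoProvision")
	} else {
		fmt.Println("NamespaceAutoProvision not found in the apiserver manifest")
	}
}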

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-034342 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
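ComponentHealth lists the pods labelled tier=control-plane in kube-system and asserts that each reports phase Running and a Ready condition, which is what the paired phase/status lines above show. A minimal standalone version of that check is sketched below; the structs are trimmed to only the fields needed and are not the test's own types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-034342",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}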

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 logs: (1.449018008s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 logs --file /tmp/TestFunctionalserialLogsFileCmd1563775426/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 logs --file /tmp/TestFunctionalserialLogsFileCmd1563775426/001/logs.txt: (1.644673779s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                    
TestFunctional/serial/InvalidService (4.16s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-034342 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-034342
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-034342: exit status 115 (368.56174ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32250 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-034342 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
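`minikube service invalid-svc` exits 115 with SVC_UNREACHABLE because the service selects no running pod, so the NodePort printed in the table has nothing behind it. A quick pre-check, sketched below rather than taken from the test, is to ask whether the service has any ready endpoint addresses before trying to open it.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// An empty result here means the service has no ready backends and
	// `minikube service` will fail the same way the log above shows.
	out, err := exec.Command("kubectl", "--context", "functional-034342",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("invalid-svc has no ready endpoints")
		return
	}
	fmt.Println("endpoints:", string(out))
}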

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 config get cpus: exit status 14 (98.974253ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 config get cpus: exit status 14 (75.113718ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
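The two Non-zero exit entries above show the contract being exercised: `config get` on a key that is not set exits with status 14 instead of returning an empty value, while set, get, and unset otherwise succeed. The sketch below runs the same cycle and reads the exit code; the meaning of 14 is taken only from this log, and the helper is illustrative, not minikube code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configCmd runs `minikube config ...` for the profile and returns the
// trimmed output plus the process exit code (0 on success).
func configCmd(args ...string) (string, int) {
	full := append([]string{"-p", "functional-034342", "config"}, args...)
	out, err := exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	configCmd("set", "cpus", "2")
	if out, code := configCmd("get", "cpus"); code == 0 {
		fmt.Println("cpus =", out)
	}
	configCmd("unset", "cpus")
	if _, code := configCmd("get", "cpus"); code == 14 {
		fmt.Println("unset key reported via exit status 14, matching the log above")
	}
}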

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-034342 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-034342 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 313751: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.49s)

                                                
                                    
TestFunctional/parallel/DryRun (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034342 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-034342 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (256.279751ms)

                                                
                                                
-- stdout --
	* [functional-034342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:49:18.657010  313226 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:18.657176  313226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:18.657182  313226 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:18.657186  313226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:18.657435  313226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:49:18.657868  313226 out.go:368] Setting JSON to false
	I1101 09:49:18.658768  313226 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5508,"bootTime":1761985051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:49:18.658829  313226 start.go:143] virtualization:  
	I1101 09:49:18.665458  313226 out.go:179] * [functional-034342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:49:18.668370  313226 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:49:18.668410  313226 notify.go:221] Checking for updates...
	I1101 09:49:18.673890  313226 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:18.676780  313226 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:49:18.679661  313226 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:49:18.682494  313226 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:49:18.685282  313226 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:18.688599  313226 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:49:18.689126  313226 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:18.735250  313226 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:49:18.735361  313226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:49:18.838567  313226 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:49:18.828576373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:49:18.838669  313226 docker.go:319] overlay module found
	I1101 09:49:18.841934  313226 out.go:179] * Using the docker driver based on existing profile
	I1101 09:49:18.844788  313226 start.go:309] selected driver: docker
	I1101 09:49:18.844811  313226 start.go:930] validating driver "docker" against &{Name:functional-034342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:18.844928  313226 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:18.848690  313226 out.go:203] 
	W1101 09:49:18.851696  313226 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:49:18.854655  313226 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034342 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.64s)
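The first dry run requests 250MB and is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because the usable minimum is 1800MB; the second dry run omits --memory and validates cleanly against the existing profile. The sketch below reproduces just the failing check; the exit-code and message expectations come from this log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-034342",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 &&
		strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("memory validation rejected the 250MB request, as expected")
		return
	}
	fmt.Println("unexpected result:", err)
}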

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034342 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-034342 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.351054ms)

                                                
                                                
-- stdout --
	* [functional-034342] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:49:18.467080  313180 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:18.467227  313180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:18.467238  313180 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:18.467245  313180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:18.467675  313180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:49:18.468095  313180 out.go:368] Setting JSON to false
	I1101 09:49:18.469008  313180 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5508,"bootTime":1761985051,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:49:18.469082  313180 start.go:143] virtualization:  
	I1101 09:49:18.472585  313180 out.go:179] * [functional-034342] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1101 09:49:18.476447  313180 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:49:18.476522  313180 notify.go:221] Checking for updates...
	I1101 09:49:18.482347  313180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:18.485260  313180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 09:49:18.488100  313180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 09:49:18.491060  313180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:49:18.493985  313180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:18.497508  313180 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:49:18.498159  313180 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:18.525538  313180 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:49:18.525648  313180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:49:18.585427  313180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:49:18.575207445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:49:18.585554  313180 docker.go:319] overlay module found
	I1101 09:49:18.588663  313180 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 09:49:18.591378  313180 start.go:309] selected driver: docker
	I1101 09:49:18.591410  313180 start.go:930] validating driver "docker" against &{Name:functional-034342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-034342 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:18.591504  313180 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:18.595062  313180 out.go:203] 
	W1101 09:49:18.597849  313180 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 09:49:18.600723  313180 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [919369ab-c944-45f4-ad3c-b1c412220f33] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003764747s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-034342 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-034342 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-034342 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-034342 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a8a77fad-cc9b-4c43-88e1-8d512b09ab1c] Pending
helpers_test.go:352: "sp-pod" [a8a77fad-cc9b-4c43-88e1-8d512b09ab1c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a8a77fad-cc9b-4c43-88e1-8d512b09ab1c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003940493s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-034342 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-034342 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-034342 delete -f testdata/storage-provisioner/pod.yaml: (1.0546049s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-034342 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [abb58e8c-814e-4579-aba7-a4b18cf6c18c] Pending
helpers_test.go:352: "sp-pod" [abb58e8c-814e-4579-aba7-a4b18cf6c18c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003408012s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-034342 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.00s)
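The PVC test writes /tmp/mount/foo from the first sp-pod, deletes the pod, recreates it from the same manifest, and then confirms the file is still visible, proving the claim's storage outlives the pod. The sketch below compresses that sequence; it reuses the manifest path and the pod and file names from the log, but omits the readiness waits the real test performs.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the functional-034342 context.
func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-034342"}, args...)
	return exec.Command("kubectl", full...).Run()
}

func main() {
	// Write a marker file into the PVC-backed volume from the first pod.
	if err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	// Recreate the pod; the claim, and the data on it, should survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits here for the new pod to become Ready.
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount/foo"); err != nil {
		fmt.Println("marker file missing after pod recreation:", err)
		return
	}
	fmt.Println("data persisted across pod recreation")
}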

                                                
                                    
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh -n functional-034342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cp functional-034342:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd28841023/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh -n functional-034342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh -n functional-034342 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/287135/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /etc/test/nested/copy/287135/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/287135.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /etc/ssl/certs/287135.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/287135.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /usr/share/ca-certificates/287135.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2871352.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /etc/ssl/certs/2871352.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2871352.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /usr/share/ca-certificates/2871352.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-034342 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh "sudo systemctl is-active docker": exit status 1 (349.57001ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh "sudo systemctl is-active containerd": exit status 1 (358.252696ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
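With crio as the active runtime, docker and containerd are expected to be inactive. `systemctl is-active` prints the state but exits non-zero for any state other than active (status 3 in the log above), so the test accepts the failing ssh as long as stdout reads inactive. The sketch below follows the same convention and inspects only stdout.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// serviceState returns what `systemctl is-active` prints for a unit inside
// the node. The command exits non-zero unless the unit is active, so the
// error is deliberately ignored and only stdout is used.
func serviceState(unit string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-034342",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", unit, serviceState(unit))
	}
}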

                                                
                                    
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-034342 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-034342 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-034342 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 309693: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-034342 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-034342 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-034342 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6c130f2f-22b6-42af-8b95-9b483ab93fc9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6c130f2f-22b6-42af-8b95-9b483ab93fc9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.008413816s
I1101 09:39:00.406814  287135 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-034342 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
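While `minikube tunnel` is running, the pending LoadBalancer service is given an ingress IP, which the jsonpath query above reads back (10.105.106.236 in the AccessDirect step below). The field stays empty until the tunnel claims the service, so a standalone check usually polls for it, as sketched here; the one-minute timeout is arbitrary and not taken from the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll for up to a minute; the field stays empty until `minikube tunnel`
	// has claimed the service.
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-034342",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err == nil {
			if ip := strings.TrimSpace(string(out)); ip != "" {
				fmt.Println("tunnel ingress IP:", ip)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is `minikube tunnel` running?")
}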

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.106.236 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-034342 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "374.977364ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.1864ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "366.493629ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "51.821477ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdany-port3341577734/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761990545666862573" to /tmp/TestFunctionalparallelMountCmdany-port3341577734/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761990545666862573" to /tmp/TestFunctionalparallelMountCmdany-port3341577734/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761990545666862573" to /tmp/TestFunctionalparallelMountCmdany-port3341577734/001/test-1761990545666862573
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.56381ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:49:06.030675  287135 retry.go:31] will retry after 609.522362ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 09:49 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 09:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 09:49 test-1761990545666862573
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh cat /mount-9p/test-1761990545666862573
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-034342 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [50ced344-6926-4579-a335-40583fff5e11] Pending
helpers_test.go:352: "busybox-mount" [50ced344-6926-4579-a335-40583fff5e11] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [50ced344-6926-4579-a335-40583fff5e11] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [50ced344-6926-4579-a335-40583fff5e11] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003708555s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-034342 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdany-port3341577734/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.05s)
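The mount test launches `minikube mount` as a background process and then retries `findmnt -T /mount-9p` until the 9p filesystem appears, which is why the first findmnt above fails and is retried after roughly 600ms. A start-and-poll sketch of the same pattern follows; the host directory /tmp/mount-src is a made-up example, and killing the process is a stand-in for the test's own cleanup helper.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the mount in the background; it keeps running until killed.
	mount := exec.Command("out/minikube-linux-arm64", "mount", "-p", "functional-034342",
		"/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println("could not start mount:", err)
		return
	}
	defer mount.Process.Kill()

	// Poll until the 9p filesystem is visible inside the node.
	for i := 0; i < 15; i++ {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-034342",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}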

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdspecific-port1947887563/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.670091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:49:14.081101  287135 retry.go:31] will retry after 374.317374ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdspecific-port1947887563/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh "sudo umount -f /mount-9p": exit status 1 (276.666514ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-034342 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdspecific-port1947887563/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T" /mount1: exit status 1 (638.301879ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:49:16.172696  287135 retry.go:31] will retry after 271.257045ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-034342 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
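The cleanup step above relies on a single kill switch rather than stopping each mount helper individually. A minimal sketch using the same profile and temp directory as this run:

# start mounts in the background, check one of them, then kill every mount helper for the profile at once
out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-arm64 mount -p functional-034342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2447316240/001:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-arm64 -p functional-034342 ssh "findmnt -T /mount1"
out/minikube-linux-arm64 mount -p functional-034342 --kill=true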

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 service list -o json
functional_test.go:1504: Took "635.812812ms" to run "out/minikube-linux-arm64 -p functional-034342 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)
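The JSON variant of the service listing is the one suited to scripting; piping the output through jq to pretty-print it is our addition here, not something the test does:

out/minikube-linux-arm64 -p functional-034342 service list -o json | jq .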

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 version -o=json --components: (1.022847092s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)
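Both version subtests exercise the same binary with different output modes; run against this profile they look like:

# short, human-readable version string
out/minikube-linux-arm64 -p functional-034342 version --short
# JSON document that also includes component version information
out/minikube-linux-arm64 -p functional-034342 version -o=json --components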

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034342 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034342 image ls --format short --alsologtostderr:
I1101 09:49:34.931567  315876 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:34.931845  315876 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:34.931858  315876 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:34.931864  315876 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:34.932138  315876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
I1101 09:49:34.932784  315876 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:34.932908  315876 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:34.933438  315876 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
I1101 09:49:34.954010  315876 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:34.954071  315876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
I1101 09:49:34.992586  315876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
I1101 09:49:35.110483  315876 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034342 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ 46fabdd7f288c │ 176MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034342 image ls --format table --alsologtostderr:
I1101 09:49:35.798158  316139 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:35.798388  316139 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.798415  316139 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:35.798433  316139 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.798721  316139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
I1101 09:49:35.799353  316139 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.799530  316139 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.800068  316139 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
I1101 09:49:35.832494  316139 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:35.832552  316139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
I1101 09:49:35.855363  316139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
I1101 09:49:35.965006  316139 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
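The ImageList* subtests all wrap the same command and only vary the output format; the --alsologtostderr flag the tests pass is omitted here for brevity:

out/minikube-linux-arm64 -p functional-034342 image ls --format short
out/minikube-linux-arm64 -p functional-034342 image ls --format table
out/minikube-linux-arm64 -p functional-034342 image ls --format json
out/minikube-linux-arm64 -p functional-034342 image ls --format yaml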

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034342 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c111
2681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-mini
kube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["
registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73
e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006680"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894
772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034342 image ls --format json --alsologtostderr:
I1101 09:49:35.507618  316069 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:35.508064  316069 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.508100  316069 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:35.508124  316069 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.508424  316069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
I1101 09:49:35.509108  316069 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.509276  316069 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.509845  316069 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
I1101 09:49:35.532384  316069 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:35.532494  316069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
I1101 09:49:35.560378  316069 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
I1101 09:49:35.684796  316069 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
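The JSON output above is an array of objects with id, repoDigests, repoTags and size fields, so it can be filtered with standard tools; jq is our addition here, not part of the test:

# print every tagged reference known to the node's runtime
out/minikube-linux-arm64 -p functional-034342 image ls --format json | jq -r '.[].repoTags[]'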

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034342 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "176006680"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034342 image ls --format yaml --alsologtostderr:
I1101 09:49:35.224585  315971 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:35.225317  315971 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.225362  315971 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:35.225384  315971 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.225685  315971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
I1101 09:49:35.226440  315971 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.226626  315971 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.227120  315971 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
I1101 09:49:35.261579  315971 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:35.261631  315971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
I1101 09:49:35.281057  315971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
I1101 09:49:35.389338  315971 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034342 ssh pgrep buildkitd: exit status 1 (366.31166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image build -t localhost/my-image:functional-034342 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-034342 image build -t localhost/my-image:functional-034342 testdata/build --alsologtostderr: (3.407498637s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034342 image build -t localhost/my-image:functional-034342 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 94f6a9681d1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-034342
--> af63e612287
Successfully tagged localhost/my-image:functional-034342
af63e6122874aa3fb88200ba99653a2997d26804ed8d63b3bbc7f310a7be98d7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034342 image build -t localhost/my-image:functional-034342 testdata/build --alsologtostderr:
I1101 09:49:35.635969  316096 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:35.636776  316096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.636793  316096 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:35.636802  316096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:35.637066  316096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
I1101 09:49:35.637728  316096 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.638364  316096 config.go:182] Loaded profile config "functional-034342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:49:35.638830  316096 cli_runner.go:164] Run: docker container inspect functional-034342 --format={{.State.Status}}
I1101 09:49:35.657496  316096 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:35.657555  316096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034342
I1101 09:49:35.674637  316096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/functional-034342/id_rsa Username:docker}
I1101 09:49:35.781687  316096 build_images.go:162] Building image from path: /tmp/build.917668183.tar
I1101 09:49:35.781841  316096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 09:49:35.790153  316096 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.917668183.tar
I1101 09:49:35.795170  316096 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.917668183.tar: stat -c "%s %y" /var/lib/minikube/build/build.917668183.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.917668183.tar': No such file or directory
I1101 09:49:35.795205  316096 ssh_runner.go:362] scp /tmp/build.917668183.tar --> /var/lib/minikube/build/build.917668183.tar (3072 bytes)
I1101 09:49:35.816798  316096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.917668183
I1101 09:49:35.827628  316096 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.917668183 -xf /var/lib/minikube/build/build.917668183.tar
I1101 09:49:35.841038  316096 crio.go:315] Building image: /var/lib/minikube/build/build.917668183
I1101 09:49:35.841102  316096 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-034342 /var/lib/minikube/build/build.917668183 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1101 09:49:38.939077  316096 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-034342 /var/lib/minikube/build/build.917668183 --cgroup-manager=cgroupfs: (3.097952216s)
I1101 09:49:38.939146  316096 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.917668183
I1101 09:49:38.947562  316096 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.917668183.tar
I1101 09:49:38.955416  316096 build_images.go:218] Built localhost/my-image:functional-034342 from /tmp/build.917668183.tar
I1101 09:49:38.955449  316096 build_images.go:134] succeeded building to: functional-034342
I1101 09:49:38.955455  316096 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)
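The STEP lines above come from building testdata/build inside the node with podman. The build context itself is not reproduced in this log, so the sketch below reconstructs a matching one from those STEP lines; the contents of content.txt are a placeholder:

mkdir -p testdata/build && cd testdata/build
printf 'hello\n' > content.txt        # placeholder file, actual contents not shown in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
cd ../.. && out/minikube-linux-arm64 -p functional-034342 image build -t localhost/my-image:functional-034342 testdata/build --alsologtostderr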

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-034342
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image rm kicbase/echo-server:functional-034342 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
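Setup tags a test image on the host with docker, and ImageRemove drops that reference from the cluster's runtime; these are the exact commands run by the two tests above, followed by the listing the test uses to confirm removal:

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-034342
out/minikube-linux-arm64 -p functional-034342 image rm kicbase/echo-server:functional-034342 --alsologtostderr
out/minikube-linux-arm64 -p functional-034342 image ls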

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-034342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
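All three UpdateContextCmd subtests run the same command, which refreshes the kubeconfig entry for the profile; the get-contexts check is our addition for inspecting the result, not part of the test:

out/minikube-linux-arm64 -p functional-034342 update-context --alsologtostderr -v=2
kubectl config get-contexts functional-034342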

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-034342
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-034342
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-034342
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 09:51:34.714834  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:57.786556  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m26.269205516s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.19s)
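The HA cluster used by the remaining MultiControlPlane tests is created with a single start invocation; this is the same command line the test used (--ha, 3072 MB memory, docker driver, cri-o runtime), followed by the status check:

out/minikube-linux-arm64 -p ha-832582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5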

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 kubectl -- rollout status deployment/busybox: (4.818894504s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-jcfpd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-k74d8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-jcfpd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-k74d8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-jcfpd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-k74d8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.64s)
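The deployment and DNS checks above can be replayed against the same cluster; the pod name comes from this run's rollout and will differ on another run:

out/minikube-linux-arm64 -p ha-832582 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-arm64 -p ha-832582 kubectl -- rollout status deployment/busybox
out/minikube-linux-arm64 -p ha-832582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- nslookup kubernetes.default.svc.cluster.local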

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-jcfpd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-jcfpd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-k74d8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-k74d8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)
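The host-reachability check boils down to two execs per pod, resolving host.minikube.internal and pinging the gateway; the pod name and the 192.168.49.1 address below are specific to this run:

out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-arm64 -p ha-832582 kubectl -- exec busybox-7b57f96db7-cbgh5 -- sh -c "ping -c 1 192.168.49.1"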

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node add --alsologtostderr -v 5
E1101 09:53:51.963336  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:51.969889  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:51.981366  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:52.003192  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:52.044679  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:52.126253  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:52.287834  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:52.609637  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:53.251117  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:54.532802  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:57.094426  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:54:02.216125  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:54:12.457635  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 node add --alsologtostderr -v 5: (1m0.361367946s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5: (1.103834872s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.47s)
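Adding the worker is a single node add against the running profile, followed by the status check that now reports the fourth node:

out/minikube-linux-arm64 -p ha-832582 node add --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5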

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-832582 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.085957338s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 status --output json --alsologtostderr -v 5: (1.068837803s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp testdata/cp-test.txt ha-832582:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582_ha-832582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test_ha-832582_ha-832582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582_ha-832582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test_ha-832582_ha-832582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582:/home/docker/cp-test.txt ha-832582-m04:/home/docker/cp-test_ha-832582_ha-832582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test_ha-832582_ha-832582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp testdata/cp-test.txt ha-832582-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m02:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m02_ha-832582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test_ha-832582-m02_ha-832582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m02:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582-m02_ha-832582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test_ha-832582-m02_ha-832582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m02:/home/docker/cp-test.txt ha-832582-m04:/home/docker/cp-test_ha-832582-m02_ha-832582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test_ha-832582-m02_ha-832582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp testdata/cp-test.txt ha-832582-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test.txt"
E1101 09:54:32.939238  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m03:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m03_ha-832582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m03:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582-m03_ha-832582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m03:/home/docker/cp-test.txt ha-832582-m04:/home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test_ha-832582-m03_ha-832582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp testdata/cp-test.txt ha-832582-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1609765245/001/cp-test_ha-832582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_ha-832582-m04_ha-832582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582 "sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m02:/home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m04:/home/docker/cp-test.txt ha-832582-m03:/home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m03 "sudo cat /home/docker/cp-test_ha-832582-m04_ha-832582-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.02s)
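For reference, the copy/verify pattern exercised above is just the minikube cp and ssh subcommands chained for each node pair; a minimal sketch to reproduce it by hand (assuming the ha-832582 profile from this run is still up; the destination filename is illustrative):

    # copy a local file onto one node, then read it back over SSH
    out/minikube-linux-arm64 -p ha-832582 cp testdata/cp-test.txt ha-832582-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-832582 ssh -n ha-832582-m02 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies use the same cp subcommand with <node>:<path> on both sides
    out/minikube-linux-arm64 -p ha-832582 cp ha-832582-m02:/home/docker/cp-test.txt ha-832582:/home/docker/cp-test_copy.txt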

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 node stop m02 --alsologtostderr -v 5: (12.09668637s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5: exit status 7 (797.096915ms)
-- stdout --
	ha-832582
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-832582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-832582-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-832582-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1101 09:54:53.367355  330926 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:54:53.367589  330926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:53.367622  330926 out.go:374] Setting ErrFile to fd 2...
	I1101 09:54:53.367643  330926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:53.367921  330926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:54:53.368155  330926 out.go:368] Setting JSON to false
	I1101 09:54:53.368225  330926 mustload.go:66] Loading cluster: ha-832582
	I1101 09:54:53.368291  330926 notify.go:221] Checking for updates...
	I1101 09:54:53.368732  330926 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:54:53.368774  330926 status.go:174] checking status of ha-832582 ...
	I1101 09:54:53.369635  330926 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:54:53.392077  330926 status.go:371] ha-832582 host status = "Running" (err=<nil>)
	I1101 09:54:53.392099  330926 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:54:53.392389  330926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582
	I1101 09:54:53.426300  330926 host.go:66] Checking if "ha-832582" exists ...
	I1101 09:54:53.426729  330926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:54:53.426788  330926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582
	I1101 09:54:53.453848  330926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582/id_rsa Username:docker}
	I1101 09:54:53.555693  330926 ssh_runner.go:195] Run: systemctl --version
	I1101 09:54:53.562681  330926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:54:53.576007  330926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:53.639370  330926 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-01 09:54:53.629649927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:54:53.639905  330926 kubeconfig.go:125] found "ha-832582" server: "https://192.168.49.254:8443"
	I1101 09:54:53.639962  330926 api_server.go:166] Checking apiserver status ...
	I1101 09:54:53.640012  330926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:54:53.652241  330926 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1236/cgroup
	I1101 09:54:53.660692  330926 api_server.go:182] apiserver freezer: "10:freezer:/docker/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/crio/crio-9456fa2da3695e668b2d656df3d2c03ec2cb866230509a87ab1ff9cbe240f39f"
	I1101 09:54:53.660756  330926 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e5a947146cd529b40fadd60c6da346c5c5824f35952a887886b172119356c737/crio/crio-9456fa2da3695e668b2d656df3d2c03ec2cb866230509a87ab1ff9cbe240f39f/freezer.state
	I1101 09:54:53.668712  330926 api_server.go:204] freezer state: "THAWED"
	I1101 09:54:53.668738  330926 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 09:54:53.677099  330926 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 09:54:53.677128  330926 status.go:463] ha-832582 apiserver status = Running (err=<nil>)
	I1101 09:54:53.677139  330926 status.go:176] ha-832582 status: &{Name:ha-832582 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:54:53.677158  330926 status.go:174] checking status of ha-832582-m02 ...
	I1101 09:54:53.677497  330926 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:54:53.699486  330926 status.go:371] ha-832582-m02 host status = "Stopped" (err=<nil>)
	I1101 09:54:53.699508  330926 status.go:384] host is not running, skipping remaining checks
	I1101 09:54:53.699515  330926 status.go:176] ha-832582-m02 status: &{Name:ha-832582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:54:53.699536  330926 status.go:174] checking status of ha-832582-m03 ...
	I1101 09:54:53.699878  330926 cli_runner.go:164] Run: docker container inspect ha-832582-m03 --format={{.State.Status}}
	I1101 09:54:53.717363  330926 status.go:371] ha-832582-m03 host status = "Running" (err=<nil>)
	I1101 09:54:53.717386  330926 host.go:66] Checking if "ha-832582-m03" exists ...
	I1101 09:54:53.717683  330926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m03
	I1101 09:54:53.739830  330926 host.go:66] Checking if "ha-832582-m03" exists ...
	I1101 09:54:53.740161  330926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:54:53.740210  330926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m03
	I1101 09:54:53.757664  330926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m03/id_rsa Username:docker}
	I1101 09:54:53.867289  330926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:54:53.881436  330926 kubeconfig.go:125] found "ha-832582" server: "https://192.168.49.254:8443"
	I1101 09:54:53.881469  330926 api_server.go:166] Checking apiserver status ...
	I1101 09:54:53.881511  330926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:54:53.893142  330926 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	I1101 09:54:53.908330  330926 api_server.go:182] apiserver freezer: "10:freezer:/docker/9630701aaf16582a98c56d41e159b4442f92e805cbe673dfdeb4afe15c29dbc0/crio/crio-5015bd4d10aba91a3d8d541376df567dea2481c96c2bd8d557e14631a5e3a5a1"
	I1101 09:54:53.908413  330926 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9630701aaf16582a98c56d41e159b4442f92e805cbe673dfdeb4afe15c29dbc0/crio/crio-5015bd4d10aba91a3d8d541376df567dea2481c96c2bd8d557e14631a5e3a5a1/freezer.state
	I1101 09:54:53.920611  330926 api_server.go:204] freezer state: "THAWED"
	I1101 09:54:53.920642  330926 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 09:54:53.929273  330926 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 09:54:53.929356  330926 status.go:463] ha-832582-m03 apiserver status = Running (err=<nil>)
	I1101 09:54:53.929380  330926 status.go:176] ha-832582-m03 status: &{Name:ha-832582-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:54:53.929429  330926 status.go:174] checking status of ha-832582-m04 ...
	I1101 09:54:53.929881  330926 cli_runner.go:164] Run: docker container inspect ha-832582-m04 --format={{.State.Status}}
	I1101 09:54:53.945973  330926 status.go:371] ha-832582-m04 host status = "Running" (err=<nil>)
	I1101 09:54:53.945998  330926 host.go:66] Checking if "ha-832582-m04" exists ...
	I1101 09:54:53.946289  330926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-832582-m04
	I1101 09:54:53.962987  330926 host.go:66] Checking if "ha-832582-m04" exists ...
	I1101 09:54:53.963291  330926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:54:53.963346  330926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-832582-m04
	I1101 09:54:53.982430  330926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/ha-832582-m04/id_rsa Username:docker}
	I1101 09:54:54.091804  330926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:54:54.105411  330926 status.go:176] ha-832582-m04 status: &{Name:ha-832582-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node start m02 --alsologtostderr -v 5
E1101 09:55:13.901107  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 node start m02 --alsologtostderr -v 5: (27.699736015s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5: (1.427029716s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.46s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.456266106s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.46s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.79s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 stop --alsologtostderr -v 5: (21.663878464s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 start --wait true --alsologtostderr -v 5
E1101 09:56:34.715423  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:56:35.822922  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 start --wait true --alsologtostderr -v 5: (1m36.916944962s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.79s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.06s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 node delete m03 --alsologtostderr -v 5: (11.085267296s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.06s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (25.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-832582 stop --alsologtostderr -v 5: (25.465693728s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-832582 status --alsologtostderr -v 5: exit status 7 (119.501994ms)
-- stdout --
	ha-832582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-832582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-832582-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1101 09:58:02.798897  342740 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:02.799140  342740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.799168  342740 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:02.799186  342740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:02.799454  342740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 09:58:02.799702  342740 out.go:368] Setting JSON to false
	I1101 09:58:02.799786  342740 mustload.go:66] Loading cluster: ha-832582
	I1101 09:58:02.799865  342740 notify.go:221] Checking for updates...
	I1101 09:58:02.800251  342740 config.go:182] Loaded profile config "ha-832582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:02.800285  342740 status.go:174] checking status of ha-832582 ...
	I1101 09:58:02.800837  342740 cli_runner.go:164] Run: docker container inspect ha-832582 --format={{.State.Status}}
	I1101 09:58:02.822030  342740 status.go:371] ha-832582 host status = "Stopped" (err=<nil>)
	I1101 09:58:02.822057  342740 status.go:384] host is not running, skipping remaining checks
	I1101 09:58:02.822064  342740 status.go:176] ha-832582 status: &{Name:ha-832582 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:58:02.822095  342740 status.go:174] checking status of ha-832582-m02 ...
	I1101 09:58:02.822439  342740 cli_runner.go:164] Run: docker container inspect ha-832582-m02 --format={{.State.Status}}
	I1101 09:58:02.846730  342740 status.go:371] ha-832582-m02 host status = "Stopped" (err=<nil>)
	I1101 09:58:02.846754  342740 status.go:384] host is not running, skipping remaining checks
	I1101 09:58:02.846761  342740 status.go:176] ha-832582-m02 status: &{Name:ha-832582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:58:02.846780  342740 status.go:174] checking status of ha-832582-m04 ...
	I1101 09:58:02.847093  342740 cli_runner.go:164] Run: docker container inspect ha-832582-m04 --format={{.State.Status}}
	I1101 09:58:02.868836  342740 status.go:371] ha-832582-m04 host status = "Stopped" (err=<nil>)
	I1101 09:58:02.868864  342740 status.go:384] host is not running, skipping remaining checks
	I1101 09:58:02.868871  342740 status.go:176] ha-832582-m04 status: &{Name:ha-832582-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.59s)
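Note that minikube status exits non-zero whenever at least one node is not running (exit status 7 in both status captures above) while still printing the per-node breakdown on stdout, so scripts have to check the exit code rather than the text. A minimal sketch, assuming the same profile name as this run:

    out/minikube-linux-arm64 -p ha-832582 status
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "cluster degraded or stopped (status exited $rc)"
    fi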

                                                
                                    
TestJSONOutput/start/Command (80.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-263903 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1101 10:06:34.716726  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-263903 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.607643872s)
--- PASS: TestJSONOutput/start/Command (80.62s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-263903 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-263903 --output=json --user=testUser: (5.829750552s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-723549 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-723549 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (129.855999ms)
-- stdout --
	{"specversion":"1.0","id":"816e9da2-7cc6-4e81-80b7-640a293ff3cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-723549] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"88f1b015-5a71-4f40-9f3b-bb15fc06bea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21833"}}
	{"specversion":"1.0","id":"de785fc5-aeb2-4a58-b34f-680464990175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4a4a3767-e225-4d56-8a2d-696ae305d330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig"}}
	{"specversion":"1.0","id":"26d3903d-8906-4e66-a8c6-827dcf3d257d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube"}}
	{"specversion":"1.0","id":"90240104-8ed8-4e9a-8bf4-9b118c1b10c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1f7bb44b-3d2e-4845-b243-cc80d472cba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"03ce0e9f-8423-4c98-8618-31daf57d6c10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-723549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-723549
--- PASS: TestErrorJSONOutput (0.28s)
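The --output=json mode emits one CloudEvents-style object per line (specversion, id, source, type, data), so the DRV_UNSUPPORTED_OS error above can be picked out of the stream with any line-oriented JSON tool. A rough sketch, assuming jq is available on the host (jq is not part of the test harness):

    out/minikube-linux-arm64 start -p json-output-error-723549 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | {name: .data.name, exitcode: .data.exitcode, message: .data.message}'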

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.75s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-878659 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-878659 --network=: (36.467422846s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-878659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-878659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-878659: (2.251309702s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.75s)

TestKicCustomNetwork/use_default_bridge_network (38.06s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-421868 --network=bridge
E1101 10:08:51.965871  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-421868 --network=bridge: (35.877637255s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-421868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-421868
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-421868: (2.152998814s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.06s)

TestKicExistingNetwork (35.92s)

=== RUN   TestKicExistingNetwork
I1101 10:09:17.535342  287135 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 10:09:17.553759  287135 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 10:09:17.553843  287135 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 10:09:17.553864  287135 cli_runner.go:164] Run: docker network inspect existing-network
W1101 10:09:17.568730  287135 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 10:09:17.568759  287135 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1101 10:09:17.568792  287135 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1101 10:09:17.568901  287135 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 10:09:17.585518  287135 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4026c1b0063 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:ce:bd:30:c3:d1} reservation:<nil>}
I1101 10:09:17.586021  287135 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002059940}
I1101 10:09:17.586050  287135 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 10:09:17.586107  287135 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 10:09:17.647571  287135 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-601454 --network=existing-network
E1101 10:09:37.789851  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-601454 --network=existing-network: (33.69534563s)
helpers_test.go:175: Cleaning up "existing-network-601454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-601454
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-601454: (2.077733345s)
I1101 10:09:53.436968  287135 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.92s)
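The same flow can be reproduced outside the test harness: pre-create a bridge network with docker, then point minikube at it by name. A minimal sketch (profile name, network name, and subnet are illustrative; the docker network create and --network invocations mirror the ones logged above):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-601454 --network=existing-network
    out/minikube-linux-arm64 delete -p existing-network-601454
    docker network rm existing-network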

                                                
                                    
TestKicCustomSubnet (36.16s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-101633 --subnet=192.168.60.0/24
E1101 10:10:15.029888  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-101633 --subnet=192.168.60.0/24: (33.952130274s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-101633 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-101633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-101633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-101633: (2.172959242s)
--- PASS: TestKicCustomSubnet (36.16s)

TestKicStaticIP (38.85s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-384400 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-384400 --static-ip=192.168.200.200: (36.515315614s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-384400 ip
helpers_test.go:175: Cleaning up "static-ip-384400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-384400
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-384400: (2.184863198s)
--- PASS: TestKicStaticIP (38.85s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-264855 --driver=docker  --container-runtime=crio
E1101 10:11:34.715464  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-264855 --driver=docker  --container-runtime=crio: (29.626879891s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-267620 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-267620 --driver=docker  --container-runtime=crio: (32.833161323s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-264855
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-267620
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-267620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-267620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-267620: (2.098852131s)
helpers_test.go:175: Cleaning up "first-264855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-264855
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-264855: (2.070072749s)
--- PASS: TestMinikubeProfile (68.08s)

TestMountStart/serial/StartWithMountFirst (8.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-815131 --memory=3072 --mount-string /tmp/TestMountStartserial3156751223/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-815131 --memory=3072 --mount-string /tmp/TestMountStartserial3156751223/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.916818944s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.92s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-815131 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (9.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-816993 --memory=3072 --mount-string /tmp/TestMountStartserial3156751223/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-816993 --memory=3072 --mount-string /tmp/TestMountStartserial3156751223/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.316852365s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.32s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-816993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-815131 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-815131 --alsologtostderr -v=5: (1.724780421s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-816993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-816993
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-816993: (1.289421106s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.77s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-816993
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-816993: (6.774636001s)
--- PASS: TestMountStart/serial/RestartStopped (7.77s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-816993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (141.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305203 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 10:13:51.963070  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305203 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.92299573s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.45s)

TestMultiNode/serial/DeployApp2Nodes (5.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-305203 -- rollout status deployment/busybox: (3.258497017s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-9pjsw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-h22h2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-9pjsw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-h22h2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-9pjsw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-h22h2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-9pjsw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-9pjsw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-h22h2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305203 -- exec busybox-7b57f96db7-h22h2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (59.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-305203 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-305203 -v=5 --alsologtostderr: (58.858415065s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.58s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-305203 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp testdata/cp-test.txt multinode-305203:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1102534694/001/cp-test_multinode-305203.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203:/home/docker/cp-test.txt multinode-305203-m02:/home/docker/cp-test_multinode-305203_multinode-305203-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m02 "sudo cat /home/docker/cp-test_multinode-305203_multinode-305203-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203:/home/docker/cp-test.txt multinode-305203-m03:/home/docker/cp-test_multinode-305203_multinode-305203-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m03 "sudo cat /home/docker/cp-test_multinode-305203_multinode-305203-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp testdata/cp-test.txt multinode-305203-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1102534694/001/cp-test_multinode-305203-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203-m02:/home/docker/cp-test.txt multinode-305203:/home/docker/cp-test_multinode-305203-m02_multinode-305203.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test_multinode-305203-m02_multinode-305203.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203-m02:/home/docker/cp-test.txt multinode-305203-m03:/home/docker/cp-test_multinode-305203-m02_multinode-305203-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m03 "sudo cat /home/docker/cp-test_multinode-305203-m02_multinode-305203-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp testdata/cp-test.txt multinode-305203-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1102534694/001/cp-test_multinode-305203-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203-m03:/home/docker/cp-test.txt multinode-305203:/home/docker/cp-test_multinode-305203-m03_multinode-305203.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test_multinode-305203-m03_multinode-305203.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 cp multinode-305203-m03:/home/docker/cp-test.txt multinode-305203-m02:/home/docker/cp-test_multinode-305203-m03_multinode-305203-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 ssh -n multinode-305203-m02 "sudo cat /home/docker/cp-test_multinode-305203-m03_multinode-305203-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.49s)
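
The block above repeats one copy-and-verify pattern for every node pair; a condensed sketch of that pattern, using the installed minikube binary in place of the test's out/minikube-linux-arm64:

    # Copy a local file onto the primary node and read it back over ssh.
    minikube -p multinode-305203 cp testdata/cp-test.txt multinode-305203:/home/docker/cp-test.txt
    minikube -p multinode-305203 ssh -n multinode-305203 "sudo cat /home/docker/cp-test.txt"
    # Copy node-to-node and verify the file on the destination node.
    minikube -p multinode-305203 cp multinode-305203:/home/docker/cp-test.txt \
      multinode-305203-m02:/home/docker/cp-test_multinode-305203_multinode-305203-m02.txt
    minikube -p multinode-305203 ssh -n multinode-305203-m02 \
      "sudo cat /home/docker/cp-test_multinode-305203_multinode-305203-m02.txt"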

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-305203 node stop m03: (1.314655056s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305203 status: exit status 7 (540.498007ms)

                                                
                                                
-- stdout --
	multinode-305203
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-305203-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-305203-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr: exit status 7 (528.968116ms)

                                                
                                                
-- stdout --
	multinode-305203
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-305203-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-305203-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:16:28.993060  389968 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:28.993236  389968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:28.993262  389968 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:28.993280  389968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:28.993601  389968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:16:28.993897  389968 out.go:368] Setting JSON to false
	I1101 10:16:28.993962  389968 mustload.go:66] Loading cluster: multinode-305203
	I1101 10:16:28.994038  389968 notify.go:221] Checking for updates...
	I1101 10:16:28.994464  389968 config.go:182] Loaded profile config "multinode-305203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:28.994500  389968 status.go:174] checking status of multinode-305203 ...
	I1101 10:16:28.995061  389968 cli_runner.go:164] Run: docker container inspect multinode-305203 --format={{.State.Status}}
	I1101 10:16:29.014872  389968 status.go:371] multinode-305203 host status = "Running" (err=<nil>)
	I1101 10:16:29.014896  389968 host.go:66] Checking if "multinode-305203" exists ...
	I1101 10:16:29.015313  389968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-305203
	I1101 10:16:29.039125  389968 host.go:66] Checking if "multinode-305203" exists ...
	I1101 10:16:29.039435  389968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:29.039497  389968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-305203
	I1101 10:16:29.060863  389968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/multinode-305203/id_rsa Username:docker}
	I1101 10:16:29.169358  389968 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:29.176222  389968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:29.190008  389968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:29.242160  389968 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 10:16:29.232703843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:16:29.242714  389968 kubeconfig.go:125] found "multinode-305203" server: "https://192.168.67.2:8443"
	I1101 10:16:29.242746  389968 api_server.go:166] Checking apiserver status ...
	I1101 10:16:29.242794  389968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:29.255176  389968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 10:16:29.264166  389968 api_server.go:182] apiserver freezer: "10:freezer:/docker/298ace4e8480259637fb79c0a1cfc15fe071ed1eae0329beb036a40e3f5b7e62/crio/crio-22d3a03f20d01143d651b83e4a5b1dbf475eb56ff596650d3f1be7535c493341"
	I1101 10:16:29.264237  389968 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/298ace4e8480259637fb79c0a1cfc15fe071ed1eae0329beb036a40e3f5b7e62/crio/crio-22d3a03f20d01143d651b83e4a5b1dbf475eb56ff596650d3f1be7535c493341/freezer.state
	I1101 10:16:29.272145  389968 api_server.go:204] freezer state: "THAWED"
	I1101 10:16:29.272172  389968 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 10:16:29.280480  389968 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 10:16:29.280506  389968 status.go:463] multinode-305203 apiserver status = Running (err=<nil>)
	I1101 10:16:29.280519  389968 status.go:176] multinode-305203 status: &{Name:multinode-305203 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:29.280535  389968 status.go:174] checking status of multinode-305203-m02 ...
	I1101 10:16:29.280856  389968 cli_runner.go:164] Run: docker container inspect multinode-305203-m02 --format={{.State.Status}}
	I1101 10:16:29.299021  389968 status.go:371] multinode-305203-m02 host status = "Running" (err=<nil>)
	I1101 10:16:29.299049  389968 host.go:66] Checking if "multinode-305203-m02" exists ...
	I1101 10:16:29.299334  389968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-305203-m02
	I1101 10:16:29.316502  389968 host.go:66] Checking if "multinode-305203-m02" exists ...
	I1101 10:16:29.316851  389968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:29.316900  389968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-305203-m02
	I1101 10:16:29.334591  389968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33269 SSHKeyPath:/home/jenkins/minikube-integration/21833-285274/.minikube/machines/multinode-305203-m02/id_rsa Username:docker}
	I1101 10:16:29.439221  389968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:29.452363  389968 status.go:176] multinode-305203-m02 status: &{Name:multinode-305203-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:29.452408  389968 status.go:174] checking status of multinode-305203-m03 ...
	I1101 10:16:29.452712  389968 cli_runner.go:164] Run: docker container inspect multinode-305203-m03 --format={{.State.Status}}
	I1101 10:16:29.469653  389968 status.go:371] multinode-305203-m03 host status = "Stopped" (err=<nil>)
	I1101 10:16:29.469673  389968 status.go:384] host is not running, skipping remaining checks
	I1101 10:16:29.469680  389968 status.go:176] multinode-305203-m03 status: &{Name:multinode-305203-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
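
A short sketch of the stop-one-node check, again substituting minikube for the test binary; exit code 7 from status is the expected result while any node is down:

    # Stop the third node, then ask for cluster status; status exits 7 because
    # multinode-305203-m03 reports its host and kubelet as Stopped.
    minikube -p multinode-305203 node stop m03
    minikube -p multinode-305203 status || echo "status exited with $? (7 while a node is stopped)"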

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 node start m03 -v=5 --alsologtostderr
E1101 10:16:34.714645  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-305203 node start m03 -v=5 --alsologtostderr: (7.568329369s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.33s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-305203
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-305203
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-305203: (25.126804098s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305203 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305203 --wait=true -v=5 --alsologtostderr: (48.308196341s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-305203
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.57s)
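
A sketch of the full-cluster restart exercised above, assuming the same profile; --wait=true makes start block until the nodes are Ready again, and comparing the two node lists confirms nothing was lost across the restart:

    minikube node list -p multinode-305203
    minikube stop -p multinode-305203
    minikube start -p multinode-305203 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-305203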

                                                
                                    
TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-305203 node delete m03: (4.925085861s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-305203 stop: (23.752472031s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305203 status: exit status 7 (91.966615ms)

                                                
                                                
-- stdout --
	multinode-305203
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-305203-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr: exit status 7 (100.097519ms)

                                                
                                                
-- stdout --
	multinode-305203
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-305203-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:18:20.871187  397764 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:18:20.871381  397764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:20.871412  397764 out.go:374] Setting ErrFile to fd 2...
	I1101 10:18:20.871437  397764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:20.871807  397764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:18:20.872108  397764 out.go:368] Setting JSON to false
	I1101 10:18:20.872167  397764 mustload.go:66] Loading cluster: multinode-305203
	I1101 10:18:20.872861  397764 config.go:182] Loaded profile config "multinode-305203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:18:20.872903  397764 status.go:174] checking status of multinode-305203 ...
	I1101 10:18:20.873665  397764 notify.go:221] Checking for updates...
	I1101 10:18:20.874406  397764 cli_runner.go:164] Run: docker container inspect multinode-305203 --format={{.State.Status}}
	I1101 10:18:20.892224  397764 status.go:371] multinode-305203 host status = "Stopped" (err=<nil>)
	I1101 10:18:20.892247  397764 status.go:384] host is not running, skipping remaining checks
	I1101 10:18:20.892254  397764 status.go:176] multinode-305203 status: &{Name:multinode-305203 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:18:20.892295  397764 status.go:174] checking status of multinode-305203-m02 ...
	I1101 10:18:20.892605  397764 cli_runner.go:164] Run: docker container inspect multinode-305203-m02 --format={{.State.Status}}
	I1101 10:18:20.921450  397764 status.go:371] multinode-305203-m02 host status = "Stopped" (err=<nil>)
	I1101 10:18:20.921471  397764 status.go:384] host is not running, skipping remaining checks
	I1101 10:18:20.921483  397764 status.go:176] multinode-305203-m02 status: &{Name:multinode-305203-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305203 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 10:18:51.962732  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305203 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.481372309s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305203 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-305203
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305203-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-305203-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.701191ms)

                                                
                                                
-- stdout --
	* [multinode-305203-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-305203-m02' is duplicated with machine name 'multinode-305203-m02' in profile 'multinode-305203'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305203-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305203-m03 --driver=docker  --container-runtime=crio: (34.270991268s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-305203
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-305203: exit status 80 (344.379478ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-305203 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-305203-m03 already exists in multinode-305203-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-305203-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-305203-m03: (2.085572437s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.84s)
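
A sketch of the two name conflicts exercised above; the exit codes are the ones reported in this run's output, and minikube stands in for the test binary:

    # A new profile may not reuse an existing machine name: exits 14 (MK_USAGE).
    minikube start -p multinode-305203-m02 --driver=docker --container-runtime=crio
    # A standalone profile named like the next node of another cluster starts fine...
    minikube start -p multinode-305203-m03 --driver=docker --container-runtime=crio
    # ...but `node add` on the cluster then fails with exit 80 (GUEST_NODE_ADD),
    # because the node name it would use (m03) is already taken.
    minikube node add -p multinode-305203
    minikube delete -p multinode-305203-m03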

                                                
                                    
TestPreload (126.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-032256 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-032256 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m4.021901539s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-032256 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-032256 image pull gcr.io/k8s-minikube/busybox: (2.350223384s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-032256
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-032256: (5.896267849s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-032256 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1101 10:21:34.715456  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-032256 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.183220352s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-032256 image list
helpers_test.go:175: Cleaning up "test-preload-032256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-032256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-032256: (2.465696236s)
--- PASS: TestPreload (126.16s)
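
A sketch of the preload check, with minikube in place of the test binary; the point being verified is that an image pulled into a --preload=false cluster is still listed after a stop/start cycle:

    minikube start -p test-preload-032256 --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p test-preload-032256 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-032256
    minikube start -p test-preload-032256 --memory=3072 --driver=docker --container-runtime=crio
    minikube -p test-preload-032256 image list      # busybox should still be listed
    minikube delete -p test-preload-032256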

                                                
                                    
TestScheduledStopUnix (109.3s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-832369 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-832369 --memory=3072 --driver=docker  --container-runtime=crio: (33.347395034s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832369 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-832369 -n scheduled-stop-832369
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832369 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:22:35.293273  287135 retry.go:31] will retry after 141.709µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.294444  287135 retry.go:31] will retry after 141.057µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.295633  287135 retry.go:31] will retry after 310.303µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.296761  287135 retry.go:31] will retry after 454.186µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.297847  287135 retry.go:31] will retry after 443.775µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.298973  287135 retry.go:31] will retry after 535.187µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.300042  287135 retry.go:31] will retry after 754.978µs: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.301162  287135 retry.go:31] will retry after 1.354424ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.303338  287135 retry.go:31] will retry after 2.843552ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.306512  287135 retry.go:31] will retry after 2.831662ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.309681  287135 retry.go:31] will retry after 3.97424ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.313880  287135 retry.go:31] will retry after 6.1971ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.321092  287135 retry.go:31] will retry after 10.996445ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.332862  287135 retry.go:31] will retry after 10.580995ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.344509  287135 retry.go:31] will retry after 39.923929ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
I1101 10:22:35.384746  287135 retry.go:31] will retry after 38.558131ms: open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/scheduled-stop-832369/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832369 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-832369 -n scheduled-stop-832369
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-832369
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832369 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-832369
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-832369: exit status 7 (72.136285ms)

                                                
                                                
-- stdout --
	scheduled-stop-832369
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-832369 -n scheduled-stop-832369
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-832369 -n scheduled-stop-832369: exit status 7 (71.576007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-832369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-832369
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-832369: (4.318198404s)
--- PASS: TestScheduledStopUnix (109.30s)
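
A sketch of the schedule/cancel/stop sequence above, using the flags from this run; once the 15s schedule fires, status reports Stopped and exits 7:

    minikube stop -p scheduled-stop-832369 --schedule 5m        # schedule a stop
    minikube stop -p scheduled-stop-832369 --cancel-scheduled   # cancel it
    minikube stop -p scheduled-stop-832369 --schedule 15s       # schedule a short one
    sleep 20
    minikube status --format='{{.Host}}' -p scheduled-stop-832369   # prints Stopped, exit code 7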

                                                
                                    
TestInsufficientStorage (12.57s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-461119 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1101 10:23:51.963070  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-461119 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.943543754s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2f342c78-9856-44c6-8016-e51c79daa59e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-461119] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74946367-e0b4-425e-9911-e30a0417df9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21833"}}
	{"specversion":"1.0","id":"cd584c52-c91b-4566-8402-fa7b30e5477e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b6189b4-ffa3-410f-8ef0-c89bc8fd5f9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig"}}
	{"specversion":"1.0","id":"c7fc1f33-b095-4185-a087-2c0c578add4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube"}}
	{"specversion":"1.0","id":"f462dcd2-52d8-41d4-8361-d6d214cb0191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cab77865-77eb-4db1-a907-63e5d43340f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f7ccfbb4-795e-4903-a63c-2ae2538c1d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0267fb4c-c611-4efd-a5e2-813581eb337c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"50bc2a0d-8005-42ca-8e63-2b990921f54a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2527676-db04-4651-a47d-74e7b6b63141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"964bfd10-2005-45b1-9480-bcf97b72caa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-461119\" primary control-plane node in \"insufficient-storage-461119\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"85788579-30fb-46a3-9353-6bbc7875e5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"28dd31bf-a981-4937-a706-ee48f4da8a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d255a2e-68a6-43d5-a88d-907f10d18379","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-461119 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-461119 --output=json --layout=cluster: exit status 7 (328.613083ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-461119","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-461119","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 10:24:00.977293  413956 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-461119" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-461119 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-461119 --output=json --layout=cluster: exit status 7 (321.975576ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-461119","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-461119","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 10:24:01.299093  414023 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-461119" does not appear in /home/jenkins/minikube-integration/21833-285274/kubeconfig
	E1101 10:24:01.310738  414023 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/insufficient-storage-461119/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-461119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-461119
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-461119: (1.97633465s)
--- PASS: TestInsufficientStorage (12.57s)
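
A sketch of the storage check; the two MINIKUBE_TEST_* variables visible in the JSON output above appear to be test-only overrides that make minikube treat /var as nearly full, so start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and the cluster-layout status reports code 507:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p insufficient-storage-461119 --memory=3072 --output=json \
      --wait=true --driver=docker --container-runtime=crio             # exits 26
    minikube status -p insufficient-storage-461119 --output=json --layout=cluster   # StatusCode 507
    minikube delete -p insufficient-storage-461119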

                                                
                                    
TestRunningBinaryUpgrade (50.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.788038980 start -p running-upgrade-645343 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.788038980 start -p running-upgrade-645343 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.078720728s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-645343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-645343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.079636797s)
helpers_test.go:175: Cleaning up "running-upgrade-645343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-645343
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-645343: (1.983882427s)
--- PASS: TestRunningBinaryUpgrade (50.92s)

                                                
                                    
TestKubernetesUpgrade (212.53s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.006414478s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-683031
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-683031: (1.561987623s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-683031 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-683031 status --format={{.Host}}: exit status 7 (117.632214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m12.722305006s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-683031 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (112.752179ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-683031] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-683031
	    minikube start -p kubernetes-upgrade-683031 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6830312 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-683031 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.667971336s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-683031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-683031
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-683031: (2.217896745s)
--- PASS: TestKubernetesUpgrade (212.53s)
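
A sketch of the upgrade path above, with minikube in place of the test binary: start on v1.28.0, stop, restart on v1.34.1, and note that asking for a downgrade is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than touching the cluster:

    minikube start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-683031
    minikube start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.34.1 \
      --driver=docker --container-runtime=crio
    # Requesting the older version on the upgraded cluster exits 106; the
    # cluster itself stays on v1.34.1 and can simply be started again.
    minikube start -p kubernetes-upgrade-683031 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
    minikube delete -p kubernetes-upgrade-683031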

                                                
                                    
TestMissingContainerUpgrade (116.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.78901966 start -p missing-upgrade-843745 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.78901966 start -p missing-upgrade-843745 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.201386851s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-843745
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-843745
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-843745 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-843745 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.817060896s)
helpers_test.go:175: Cleaning up "missing-upgrade-843745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-843745
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-843745: (2.100592517s)
--- PASS: TestMissingContainerUpgrade (116.65s)
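
A sketch of the recovery path above, assuming a profile that was created by an older minikube release (the test uses a downloaded v1.32.0 binary) and whose docker container has since been removed by hand:

    # Remove the profile's container out from under minikube...
    docker stop missing-upgrade-843745
    docker rm missing-upgrade-843745
    # ...then a plain start on the same profile recreates it with the current binary.
    minikube start -p missing-upgrade-843745 --memory=3072 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio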

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180480 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-180480 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (108.102427ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-180480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
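
A sketch of the flag conflict above: combining --no-kubernetes with --kubernetes-version exits 14, and the error message suggests unsetting any global kubernetes-version before starting without Kubernetes:

    minikube start -p NoKubernetes-180480 --no-kubernetes --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio      # exits 14 (MK_USAGE)
    minikube config unset kubernetes-version        # the fix the error message suggests
    minikube start -p NoKubernetes-180480 --no-kubernetes --memory=3072 \
      --driver=docker --container-runtime=crio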

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180480 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180480 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.402345953s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-180480 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (59.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180480 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180480 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (57.121913678s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-180480 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-180480 status -o json: exit status 2 (430.235805ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-180480","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-180480
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-180480: (2.42850154s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (59.98s)

                                                
                                    
TestNoKubernetes/serial/Start (11.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180480 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180480 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.265683022s)
--- PASS: TestNoKubernetes/serial/Start (11.27s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-180480 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-180480 "sudo systemctl is-active --quiet service kubelet": exit status 1 (383.403262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-180480
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-180480: (1.390914738s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180480 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180480 --driver=docker  --container-runtime=crio: (7.53667072s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-180480 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-180480 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.896132ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2643749833 start -p stopped-upgrade-261821 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1101 10:26:17.791563  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:26:34.714579  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2643749833 start -p stopped-upgrade-261821 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.27187709s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2643749833 -p stopped-upgrade-261821 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2643749833 -p stopped-upgrade-261821 stop: (1.317685891s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-261821 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 10:26:55.031346  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-261821 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.95052889s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.54s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-261821
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-261821: (1.291527984s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
TestPause/serial/Start (84.59s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-197523 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1101 10:28:51.962924  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-197523 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.587286958s)
--- PASS: TestPause/serial/Start (84.59s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.57s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-197523 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-197523 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.540359888s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.57s)

                                                
                                    
TestNetworkPlugins/group/false (5.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-220636 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-220636 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (280.988837ms)

                                                
                                                
-- stdout --
	* [false-220636] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:30:18.503715  448123 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:30:18.503961  448123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:30:18.503990  448123 out.go:374] Setting ErrFile to fd 2...
	I1101 10:30:18.504011  448123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:30:18.504319  448123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-285274/.minikube/bin
	I1101 10:30:18.504810  448123 out.go:368] Setting JSON to false
	I1101 10:30:18.505832  448123 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7968,"bootTime":1761985051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:30:18.505935  448123 start.go:143] virtualization:  
	I1101 10:30:18.510916  448123 out.go:179] * [false-220636] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:30:18.513974  448123 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:30:18.514141  448123 notify.go:221] Checking for updates...
	I1101 10:30:18.519878  448123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:30:18.522715  448123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-285274/kubeconfig
	I1101 10:30:18.525832  448123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-285274/.minikube
	I1101 10:30:18.529317  448123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:30:18.532338  448123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:30:18.535801  448123 config.go:182] Loaded profile config "force-systemd-env-065424": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:30:18.535914  448123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:30:18.574603  448123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:30:18.574741  448123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:30:18.686347  448123 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2025-11-01 10:30:18.661925795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:30:18.686462  448123 docker.go:319] overlay module found
	I1101 10:30:18.691444  448123 out.go:179] * Using the docker driver based on user configuration
	I1101 10:30:18.694471  448123 start.go:309] selected driver: docker
	I1101 10:30:18.694492  448123 start.go:930] validating driver "docker" against <nil>
	I1101 10:30:18.694507  448123 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:30:18.698371  448123 out.go:203] 
	W1101 10:30:18.701546  448123 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 10:30:18.704563  448123 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-220636 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-220636" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-220636

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-220636"

                                                
                                                
----------------------- debugLogs end: false-220636 [took: 5.121933606s] --------------------------------
helpers_test.go:175: Cleaning up "false-220636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-220636
--- PASS: TestNetworkPlugins/group/false (5.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.122827999s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-180313 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e735f534-a1e8-4e99-b151-9a25498823c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e735f534-a1e8-4e99-b151-9a25498823c7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003819158s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-180313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-180313 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-180313 --alsologtostderr -v=3: (12.057246279s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313: exit status 7 (73.468185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-180313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1101 10:33:51.962574  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-180313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.068483942s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180313 -n old-k8s-version-180313
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wt2nm" [954439ef-73b3-44b2-bf87-2f7761a1c85b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004428748s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wt2nm" [954439ef-73b3-44b2-bf87-2f7761a1c85b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003812441s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-180313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180313 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (74.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.692198124s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.58933954s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-170467 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [63a3bfba-fa06-422e-9226-ff614dc0a6b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [63a3bfba-fa06-422e-9226-ff614dc0a6b5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003456808s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-170467 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-170467 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-170467 --alsologtostderr -v=3: (12.064150956s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-618070 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f0e3261c-8c25-4d4b-a969-0f9698b1e429] Pending
helpers_test.go:352: "busybox" [f0e3261c-8c25-4d4b-a969-0f9698b1e429] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f0e3261c-8c25-4d4b-a969-0f9698b1e429] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004979252s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-618070 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467: exit status 7 (73.349021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-170467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-170467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.647676392s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-170467 -n no-preload-170467
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-618070 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-618070 --alsologtostderr -v=3: (12.419895939s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070: exit status 7 (102.801843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-618070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (61.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:36:34.715423  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-618070 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.525876816s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-618070 -n embed-certs-618070
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (61.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7scm" [f3881c3b-3785-428f-b5cc-cb419961b2a2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004087908s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7scm" [f3881c3b-3785-428f-b5cc-cb419961b2a2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004963313s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-170467 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)
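
Note: the two dashboard checks above amount to waiting for the addon's pod and then inspecting the metrics-scraper deployment. A hand-run equivalent might look like the following; the explicit `kubectl wait` is an illustrative assumption, since the suite polls pods through its own helpers:

    # Wait until the dashboard pod created by the addon reports Ready.
    kubectl --context no-preload-170467 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    # Confirm the metrics-scraper deployment exists and inspect its rollout state.
    kubectl --context no-preload-170467 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper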

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-170467 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
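
Note: the image audit is a single CLI call; anything outside minikube's expected Kubernetes image set is logged as "non-minikube". A sketch of the same check, with `jq .` added here purely for readable output (it is not part of the test):

    # Dump the images present in the profile's container runtime as JSON.
    out/minikube-linux-arm64 -p no-preload-170467 image list --format=json | jq .
    # In this run the only extra images were kindest/kindnetd and gcr.io/k8s-minikube/busybox.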

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.5863291s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8dsr" [8936c3f0-ba9d-4810-aab8-12f7e79df6f0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004547494s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8dsr" [8936c3f0-ba9d-4810-aab8-12f7e79df6f0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005124374s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-618070 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-618070 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:37:55.650554  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:37:58.213341  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:03.334594  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:13.576238  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:34.057753  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.430971063s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-761749 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-761749 --alsologtostderr -v=3: (1.353778516s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749: exit status 7 (74.526345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-761749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-761749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.304111503s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-761749 -n newest-cni-761749
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [449ee4be-9b51-4739-a427-f668f7aa9729] Pending
helpers_test.go:352: "busybox" [449ee4be-9b51-4739-a427-f668f7aa9729] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 10:38:51.963031  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [449ee4be-9b51-4739-a427-f668f7aa9729] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004556216s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)
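
Note: DeployApp creates a pod from testdata/busybox.yaml, waits for the integration-test=busybox label to become healthy, then reads the container's open-file limit. Roughly, assuming the repository's testdata directory and using `kubectl wait` in place of the suite's internal polling:

    kubectl --context default-k8s-diff-port-245904 create -f testdata/busybox.yaml
    # Illustrative wait; the test itself polls the pod for up to 8 minutes.
    kubectl --context default-k8s-diff-port-245904 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # The test asserts on the file-descriptor limit inside the container.
    kubectl --context default-k8s-diff-port-245904 exec busybox -- /bin/sh -c "ulimit -n"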

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-761749 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-245904 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-245904 --alsologtostderr -v=3: (12.302766053s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.415783308s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904: exit status 7 (70.296318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-245904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:39:15.021895  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-245904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.32342509s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-245904 -n default-k8s-diff-port-245904
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l727q" [b29821b8-c8ed-4661-be4e-54b3ffcd852b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003652939s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l727q" [b29821b8-c8ed-4661-be4e-54b3ffcd852b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003317957s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-245904 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-245904 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m29.791820736s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.79s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-220636 "pgrep -a kubelet"
I1101 10:40:32.066559  287135 config.go:182] Loaded profile config "auto-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
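
Note: KubeletFlags only greps the kubelet process line over SSH so the flags the started cluster is actually running with can be asserted; the same line can be pulled manually:

    # Print the full kubelet command line (including flags) inside the node container.
    out/minikube-linux-arm64 ssh -p auto-220636 "pgrep -a kubelet"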

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-220636 replace --force -f testdata/netcat-deployment.yaml
I1101 10:40:32.422323  287135 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qjxpx" [3d0141dd-85b8-470a-9656-71692ea1e07e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:40:36.944097  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qjxpx" [3d0141dd-85b8-470a-9656-71692ea1e07e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004498601s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.36s)
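
Note: NetCatPod force-replaces the netcat deployment from testdata/netcat-deployment.yaml and waits for an app=netcat pod to run; the DNS, Localhost, and HairPin subtests below all exec into that same deployment. Approximately (the explicit wait is an illustrative assumption, since the suite polls internally):

    kubectl --context auto-220636 replace --force -f testdata/netcat-deployment.yaml
    # Illustrative wait; the suite polls pods matching app=netcat for up to 15 minutes.
    kubectl --context auto-220636 wait --for=condition=Ready pod -l app=netcat --timeout=15m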

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
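
Note: the three probes above exercise name resolution plus loopback and hairpin connectivity from inside the netcat pod; only the target of the nc check differs. As one-liners against the auto-220636 context:

    # DNS: resolve the in-cluster API service name.
    kubectl --context auto-220636 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to port 8080 on the pod's own loopback interface.
    kubectl --context auto-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: reach the pod back through its own "netcat" service name.
    kubectl --context auto-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"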

                                                
                                    
TestNetworkPlugins/group/calico/Start (61.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1101 10:41:28.314732  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:34.715473  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.079498173s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7hh5c" [aaf298a7-5787-4fa8-96e6-3e2bb78444fc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003539003s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-220636 "pgrep -a kubelet"
I1101 10:42:06.480992  287135 config.go:182] Loaded profile config "kindnet-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-220636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5rz2d" [130b34d0-eca7-4972-8360-78a71b437122] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:42:09.276079  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5rz2d" [130b34d0-eca7-4972-8360-78a71b437122] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.009462421s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6hgt5" [6a9cb541-a294-4410-9d43-e709878dc2f4] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-6hgt5" [6a9cb541-a294-4410-9d43-e709878dc2f4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004242081s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
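
Note: ControllerPod only verifies that the CNI's node agent is healthy before the connectivity tests run; for calico that is the pod labelled k8s-app=calico-node (kindnet and flannel use app=kindnet and app=flannel, as shown in their own blocks). A manual equivalent:

    kubectl --context calico-220636 -n kube-system get pods -l k8s-app=calico-node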

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-220636 "pgrep -a kubelet"
I1101 10:42:17.606866  287135 config.go:182] Loaded profile config "calico-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-220636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v5wz4" [eddbfcf8-b326-4e4f-8774-a1caf34186c5] Pending
helpers_test.go:352: "netcat-cd4db9dbf-v5wz4" [eddbfcf8-b326-4e4f-8774-a1caf34186c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v5wz4" [eddbfcf8-b326-4e4f-8774-a1caf34186c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004599235s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.509996129s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1101 10:42:57.793008  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/addons-720971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:20.785846  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/old-k8s-version-180313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:31.198089  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:35.033399  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.370220  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.376661  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.388122  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.410207  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.451483  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.533180  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.694682  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:50.016507  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:50.658761  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:51.940156  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:51.962446  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/functional-034342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.036427651s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-220636 "pgrep -a kubelet"
I1101 10:43:52.957555  287135 config.go:182] Loaded profile config "custom-flannel-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-220636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-29gff" [f0d764a8-ac18-4957-9898-336f44a4d976] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:43:54.501821  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-29gff" [f0d764a8-ac18-4957-9898-336f44a4d976] Running
E1101 10:43:59.624251  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003569115s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-220636 "pgrep -a kubelet"
I1101 10:44:16.835196  287135 config.go:182] Loaded profile config "enable-default-cni-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-220636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8j4nk" [7bdfc4f9-a89f-457f-8b4c-bbdd047f900c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8j4nk" [7bdfc4f9-a89f-457f-8b4c-bbdd047f900c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003461231s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (67.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.35818892s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1101 10:45:11.310645  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/default-k8s-diff-port-245904/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.366646  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.372993  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.384714  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.406152  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.447559  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.528951  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:32.690434  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:33.012560  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-220636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m25.437278714s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rfrg7" [d0defb68-7b85-49f3-9e4f-a156b5938369] Running
E1101 10:45:33.653957  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:34.935940  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:45:37.497554  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003319368s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-220636 "pgrep -a kubelet"
I1101 10:45:39.472340  287135 config.go:182] Loaded profile config "flannel-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-220636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6h7wm" [6166b254-6c8a-4559-832b-dbe523dd661b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:45:42.619211  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/auto-220636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6h7wm" [6166b254-6c8a-4559-832b-dbe523dd661b] Running
E1101 10:45:47.332288  287135 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/no-preload-170467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004076283s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-220636 "pgrep -a kubelet"
I1101 10:46:19.942033  287135 config.go:182] Loaded profile config "bridge-220636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-220636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mlrwh" [6472bd0e-020a-4710-8825-52577049ecd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mlrwh" [6472bd0e-020a-4710-8825-52577049ecd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006465562s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-220636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-220636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-812096 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-812096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-812096
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-416512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-416512
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-220636 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-220636" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21833-285274/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-197523
contexts:
- context:
    cluster: pause-197523
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-197523
  name: pause-197523
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-197523
  user:
    client-certificate: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.crt
    client-key: /home/jenkins/minikube-integration/21833-285274/.minikube/profiles/pause-197523/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-220636

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-220636"

                                                
                                                
----------------------- debugLogs end: kubenet-220636 [took: 4.411666661s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-220636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-220636
--- SKIP: TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-220636 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-220636" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-220636

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-220636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220636"

                                                
                                                
----------------------- debugLogs end: cilium-220636 [took: 4.585161329s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-220636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-220636
--- SKIP: TestNetworkPlugins/group/cilium (4.75s)

                                                
                                    